Neuroscience


Neuroscience is the scientific study of nervous tissue, activity, organization, systems, and interactions. It is paradigmatically interdisciplinary, currently including biophysics, organic chemistry and biochemistry, molecular through evolutionary biology, anatomy and physiology, ethology, neuropsychology, and the cognitive and information sciences. Investigators include basic scientists and clinicians. During the late twentieth century, neuroscience underwent enormous growth. Quantitative information available on the Society for Neuroscience's Web site speaks to this. Beginning in 1970 with 500 members, at last count (summer 2004) the Society boasted more than 34,000 members worldwide. More than 30,000 registrants attended the 2004 annual meeting, where more than 14,000 posters and oral presentations were delivered. There are now more than 300 graduate training programs in neuroscience worldwide. Given neuroscience's increasing academic influence and its obvious connection with philosophy's perennial mind-body problem, it was inevitable that philosophers would begin taking a serious interest.

Academic philosophy's systematic interest might be dated to 1986, the year that Patricia Churchland's Neurophilosophy appeared. She boldly proclaimed that "nothing is more obvious than that philosophers of mind could profit from knowing at least something of what there is to know about how the brain works" (p. 4). Her book presented what was then textbook neuroscience, contextualized by developments in post-logical-empiricist philosophy of science. It set the stage for much of the neurophilosophy and philosophy of neuroscience that followed, especially in the branch of neuroscience that philosophers attended to (cognitive neuroscience). This entry will present some neuroscientific techniques and results that have attracted philosophers' attention. In the interest of pedagogy, the emphasis will be on the scientific details. It will close with a section describing another field of contemporary neuroscience that unfortunately has captured less philosophical attention, followed by a more detailed discussion of implications for mind-brain reductionism. Space limitations preclude a comprehensive survey, and the bibliography is limited both in its number of entries and in being drawn primarily from textbook sources and review articles (all of which, however, contain extensive references to the primary scientific literature). This is befitting an encyclopedia entry, but philosophers who are interested in acquiring a serious understanding of actual neuroscience are urged not to stop with these sources. There is no shortcut around delving into the primary literature. Superficial neuroscience still serves too often in straw-man arguments in the philosophy of mind.

Ideally this entry would also include work on pain processing, especially on the two types of pain circuits (rapidly conducting Aδ fibers and slowly conducting C fibers) and the different pain qualities carried by each; the neural mechanisms of dream sleep, especially endogenously produced activity in sensory regions; the discovery of mirror neurons in primate brains, which are active both when the subject performs a specific motor task and when the subject observes a cohort performing that task; the sea change in computational neuroscience during the 1990s, away from abstract network modeling (inspired by early successes of "connectionist" artificial intelligence) and toward compartmental modeling, in which the patch of neural membrane and its ion-specific conductance capacities become the basic units of analysis; and the neurobiology and behavioral genetics of schizophrenia (as elaborated in numerous publications by Kenneth Schaffner). Philosophers have argued for implications from each. But choices were necessary.

Functional Neuroimaging

Functional neuroimaging provides a window into the active, healthy brain. Results from two imaging techniques have dominated philosophers' attention: positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). PET is based on the detection of positrons (positively charged electrons) emitted by decaying radioactive nuclei. Subjects are injected with water (or sugars) labeled with a radioactive, positron-emitting isotope (such as oxygen-15, whose nuclei are manufactured to contain the normal eight protons but only seven neutrons). During the minute following injection, radioactive water accumulates in biological tissues in amounts directly proportional to local blood flow. Positrons leave the nuclei of the unstable, radioactive atoms and travel only a short distance through biological tissue (at most a few millimeters).

After losing their kinetic energy, positrons are attracted to negatively charged electrons and collide with them. The collision annihilates both particles, and the resulting energy manifests as two photons traveling in opposite directions (180° apart) from the annihilation site. These photons exit the tissue being imaged and are detected by radiation detectors arranged in coincidence circuits (the "PET camera"). Photons arriving simultaneously at opposing detectors are counted, and these counts are converted into an image that reflects the relative number of annihilation collisions localized to a given region. A single ring of coincident detectors can only image a single "slice" through the tissue, but modern PET cameras contain multiple rings and so can image multiple parallel "slices" simultaneously. Powerful algorithms and computer graphics can reconstruct the functional images in any desired orientation. Color codes are typically used to denote intensity of activity.

By subtracting images generated during a carefully selected control task from those generated during an experimental task, PET generates a picture of the location and intensity of activity specific to performing the experimental task. These are the colorful images published in PET studies. But what PET measures directly is localized blood flow to a small region of biological tissue. The activity interpretation exploits the known (and independently verified) positive correlation between increased local blood flow and increased cellular activity in that region.
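The subtraction logic is simple enough to sketch in a few lines of code. The following Python fragment is only an illustration of the voxel-wise arithmetic, not any laboratory's actual analysis pipeline; the array shapes, placeholder data, and threshold are invented for the example.

```python
import numpy as np

# Hypothetical stacks of reconstructed PET images (scans x X x Y x Z):
# one stack acquired during the experimental task, one during the control task.
experimental = np.random.rand(12, 64, 64, 32)   # placeholder data
control = np.random.rand(12, 64, 64, 32)        # placeholder data

# Average each condition over scans, then subtract control from experimental,
# voxel by voxel, to isolate blood-flow increases specific to the task.
difference = experimental.mean(axis=0) - control.mean(axis=0)

# Flag voxels whose blood-flow signal rose most during the experimental task
# (an arbitrary cutoff here; real studies use formal statistical thresholds).
threshold = difference.mean() + 2 * difference.std()
active_voxels = np.argwhere(difference > threshold)
print(f"{len(active_voxels)} voxels exceed the illustrative threshold")
```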

fMRI (more precisely, blood oxygenation level dependent, or BOLD, fMRI) also exploits the established correlation between localized blood flow changes and cellular activity in tiny neural regions. But to measure these changes, it takes advantage of the different magnetic properties of oxygen-bearing and deoxygenated hemoglobin in a strong magnetic field. Oxygenated hemoglobin is more prevalent in the bloodstream in regions of high cellular activity. The metabolic demands of highly active neurons and glial cells generate signals to blood vessels to increase blood flow to the region (the "hemodynamic response"), and the resulting supply exceeds the cells' capacity to remove oxygen from hemoglobin. As of 2004, fMRI scanners approved for human use can measure and localize these magnetic differences to less than one millimeter; stronger magnetic fields yield more precise measurements and localizations. Algorithms and graphics capabilities comparable to PET technology reconstruct "slices" through the imaged tissue at any desired orientation. By normalizing and contrasting BOLD signals across experimental and carefully selected control tasks, experimenters can image activity location and intensity specific to the experimental task. A variety of postprocessing techniques are employed to account for the potentially variable hemodynamic delays between neural activity generated by task performance and the ensuing increase in blood flow.
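One common way of handling the hemodynamic delay is to build it into the predicted signal before comparing prediction and measurement. The Python sketch below is a bare-bones illustration of that idea rather than any scanner vendor's or laboratory's method; the response function, repetition time, and task timing are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import gamma

def hemodynamic_response(t):
    # A rough double-gamma-style response: a delayed peak followed by a small
    # undershoot. Parameters are illustrative, not canonical values.
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

tr = 2.0                              # assumed time between scans, in seconds
t = np.arange(0, 30, tr)

task = np.zeros(100)                  # hypothetical on/off task schedule
task[20:40] = task[60:80] = 1.0       # task "on" during these scans

# Predicted BOLD signal: the task schedule smoothed and delayed by the
# hemodynamic response, mimicking sluggish blood-flow changes.
predicted = np.convolve(task, hemodynamic_response(t))[:len(task)]

# Score one voxel's measured time series (faked here) against the prediction.
measured = predicted + 0.5 * np.random.randn(len(task))
r = np.corrcoef(predicted, measured)[0, 1]
print(f"task-related activation score for this voxel: r = {r:.2f}")
```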

A handful of functional neuroimaging studies (mostly older ones from the early days of PET!) recur in philosophical discussions. (All of the studies discussed below are also discussed and referenced in Michael Posner and Marcus Raichle's popular book, Images of Mind, 1997.) One still sees reference to Per Roland and his colleagues' regional cerebral blood flow studies from the mid-1980s. Their subjects performed a number of cognitive tasks, including verbalizations, arithmetical calculations, and a complicated memory imagery task that involved mentally walking familiar streets and making a system of turns while reporting landmarks visualized along the way. The memory imagery task produced increased blood flow bilaterally to regions in the parietal and temporal lobes, regions that lesion data from human neurological patients had previously revealed to be involved in mental imagery. Stephen Kosslyn's work on mental imagery using neuroimaging techniques, especially work reported in his Image and Brain (1994), is also discussed often by philosophers in debates about the structure of cognitive representations. Much of Kosslyn's work demonstrates that the same neural regions are activated when subjects form a visual mental image and when they visually perceive a similar stimulus. He has demonstrated these effects as far back in the visual processing pathways as primary visual cortex (V1). They hold for locations containing neurons known to specialize for the size of perceived stimuli and for stimuli viewed from typical or atypical perspectives.

Much philosophical attention on functional neuroimaging focuses on its implications for localization hypotheses about cognitive functions. Steven Petersen and his colleagues' studies of language processing and use from the late 1980s are still cited and discussed. They employed PET and a hierarchical experimental design that enabled them to separate activations generated by passively viewing words, passively listening to words, speaking words viewed or heard, and generating semantically related words to those viewed or heard. Different tasks in this hierarchy produced PET activation increases in different neural regions, suggesting to some the localization of different component tasks involved in language processing, including word perception, speech production, and semantic access. Localization arguments and their scientific grounding in functional neuroimaging studies have been challenged, notably by William Uttal in The New Phrenology (2001).

A handful of functional neuroimaging studies of attention rose to philosophical prominence with growing interest in consciousness. A popular example uses the Stroop task to induce conflict. Color words are presented visually in either compatible or incompatible print colors (e.g., compatible: "red" printed in red; incompatible: "red" printed in green). Subjects are asked to name the color of the print. Behaviorally, as measured by errors and response times, subjects find incompatible conditions much harder. Some psychologists have argued that incompatible conditions require conscious effort to inhibit saying the color word. José Pardo and his colleagues in the early 1990s found strong activation effects specific to the (forebrain) anterior cingulate gyrus when compatible PET activation results were subtracted from incompatible ones. These results are consistent with behavioral data from patients with anterior cingulate lesions and lend empirical support to earlier speculations about the neural components of an executive attentional control network.
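For concreteness, the behavioral effect itself is just a difference of means. The snippet below is a toy Python illustration with invented response times; it is not data from the Pardo study.

```python
import numpy as np

# Hypothetical response times (milliseconds) for one subject; values are made up.
compatible_rt = np.array([520, 495, 510, 540, 505])     # "red" printed in red
incompatible_rt = np.array([680, 645, 702, 660, 690])   # "red" printed in green

# The Stroop interference effect is the slowing observed on incompatible trials.
interference = incompatible_rt.mean() - compatible_rt.mean()
print(f"Stroop interference: {interference:.0f} ms")
```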

Clinical Neuropsychology and Neurology

Philosophers have long taken an interest in the behavioral effects of brain damage and disease. (Bryan Kolb and Ian Whishaw's Fundamentals of Human Neuropsychology, 2003, is an excellent textbook that includes discussions of the topics covered in this section and extensive references to the primary scientific literature.) Commissurotomy ("split brain" surgery) is one contemporary example. To treat otherwise intractable epilepsy, neurosurgeons in the early 1960s revived a surgical technique of cutting a patient's corpus callosum. The corpus callosum is a huge bundle of axon fibers that connects homologous regions of the left and right cortical hemispheres.

The procedure was clinically successful, with a minimum of apparent behavioral effects, until Roger Sperry and his collaborators (Michael Gazzaniga, Joseph Bogen) applied more sophisticated tests. They discovered that these patients had lost the capacity of their two cerebral hemispheres to communicate directly with each other. Owing to the segregation and crossing of axon projections from sensory receptor organs to relay neurons in the thalamus and sensory cortex, experimenters could direct, for example, different visual stimuli to the left and right cortical hemispheres. If one then asked the subject to pick up an object related to the visual display with his or her left hand, the subject would pick up an object related to the display presented to the right hemisphere. (As with sensation, the motor system also crosses over: right motor cortex controls left-side movement and vice versa.) If one then asked that subject to explain verbally why he or she was holding that object (and the subject was among the roughly 85 percent of humans with speech localized to the left hemisphere), the subject's verbal response indicated no awareness of the display that had been presented to the right hemisphere; instead the subject confabulated a verbal account relating the chosen object to the left hemisphere's visual display. The variety and number of similar results led to speculations about two seats of conscious awareness and control in a single human brain, and to subsequent philosophical reflections about the unity of the self (or lack thereof).

Blindsight refers to preserved visual capacities following damage to visual cortex. Such damage produces a scotoma (a "blind spot") at circumscribed locations in the patient's visual field. Despite no conscious awareness of visual stimuli presented there, these patients nevertheless display some impressive visual abilities when prompted to guess about stimuli presented in their scotoma, including pointing accurately to visual stimulus location, detecting movement, and discriminating shapes (and in a few cases, colors). Their performances far exceed chance. As reviewed in Lawrence Weiskrantz's Consciousness Lost and Found (1998), experimental work over the past three decades has mostly confirmed early results and has introduced controls to address methodological criticisms of the early studies. Blindsight has figured into philosophical discussions of the nature of visual consciousness and the location of its neurobiological mechanisms, as well as epistemological discussions about accurate perceptual judgments and the purported necessity of awareness.

Denial symptoms are the opposite of blindsight. Blindness denial (Anton's syndrome) can accompany cortically induced blindness: such patients are functionally blind by all objective tests and measures, yet they vehemently claim that they can see. Paralysis denial can follow damage to motor cortex: such patients are functionally paralyzed on the side of the body opposite the damage, yet they vehemently deny that they are paralyzed. Many patients generate spontaneous confabulations (e.g., "it is dark in this room," "I have bad arthritis in my left shoulder; it hurts to move my left arm") to explain their failures on simple behavioral measures. Numerous controls are standard in neurological assessment to rule out cases of confusion or a stubborn refusal to accept or admit the deficit. Some philosophers and neurologists have argued from these clinical details toward revisions of our commonsense conceptions of awareness, conscious control, and the initiation of behavior. Vilayanur Ramachandran and Susan Blakeslee's popular book, Phantoms in the Brain (1998), is a good example, with elaborate discussions of clinical cases and a good bibliography of primary sources.

Contralateral neglect ("hemineglect") is a condition in which patients ignore the side of their body, and of the world, opposite the side of damage to parietal cortex. (Typically the damage is to the right hemisphere, producing left-side neglect.) The neglect invades all sensory modalities, is sometimes accompanied by denial and confabulation (to the point of patients denying that their neglected limbs even belong to them), and even invades memories and images. A famous study from the late 1970s by the neurologist Edoardo Bisiach and his colleagues asked recent stroke patients demonstrating neglect symptoms to remember a famous square in Milan from one vantage point and to describe all the objects they remembered. They were then asked to visualize the square from the opposite vantage point and describe the objects remembered. In both cases they described objects only on their nonneglected side, meaning that they described a different set of objects from the two vantage points. Hemineglect appears to be an awareness deficit. If the only objects available for patients to attend to are on the neglected side, they can attend to them. But when objects are present on the nonneglected side, they seem to lose all awareness of the opposite space. Philosophers working on consciousness, awareness, and their brain mechanisms, and on body awareness and body-in-space representations, have appealed to neglect data.

The Binding Problem

Conscious experiences are present to us as unified wholes. Visual object perception provides rich examples. In ordinary circumstances I see a football zooming toward me, not separately a brown color, an oblong shape, and motion (speed, trajectory) toward me. Yet each of these visual qualities is extracted by neuronal activity in spatially separated areas. Separate neural pathways respond to qualities that characterize a perceived object's identity (the ventral or "what" stream through inferior temporal cortex) and its location, motion, and my actions toward it (the dorsal or "where/how" stream through posterior parietal cortex). Neurons specialized for specific aspects of the visual stimulus are at distinct locations within each pathway. Seeing an object requires neuronal activity in spatially separated regions, and there is no evidence for "grandmother" neurons further downstream onto which all of these active neurons project. This is the "binding problem." How is activity in these spatially separated regions bound together, so as to act as a unit and produce a unified visual percept? And given that an object seen is often also heard, felt, or smelled simultaneously, and that these multimodal perceptual experiences are also unified in conscious experience, we actually confront a set of binding problems. (The psychologist Anne Treisman's 1996 review article is an excellent introduction.)

Throughout the 1990s a variety of "temporal synchronicity" solutions were popular. These held that binding results from induced synchronous activity in specific neurons in the separate pathways and processing areas. The discovery of a robust "40 Hz oscillation pattern" across the mammalian cortex during wakeful attention and rapid eye movement (REM, "dreaming") sleep inspired this approach. Feedforward and reciprocal feedback anatomical projections between sensory modality-specific and nonspecific neuron clusters ("nuclei") in the thalamus and sensory cortex provided a biologically plausible hypothesis for how temporal synchronicity might be induced.

However, problems quickly surfaced. It is notoriously difficult to determine the "binding window," the time interval during which the spatially separated processing must occur. Are mechanisms sensitive to temporally coherent discharges tied to the full length of activated neuronal discharges, making the binding window up to several hundred milliseconds long? If so, then because distinct and changing stimuli clutter the visual field continuously over this long an interval, how do we successfully bind together the right combination of features? Is activity onset, or the rise time of discharge, the relevant temporal feature? If so, this leads to difficulties when we consider the variable latencies of activity in different areas of modality-specific sensory pathways. Latency differences exist all the way back to activity in sensory receptor cells: hair cells at different locations on the cochlea and photoreceptors at different locations on the retina respond at slightly different times to a single auditory or visual stimulus. Moving up both the auditory and the visual processing streams, the temporal differences at which information about different aspects of a single stimulus reaches later points can be tens of milliseconds. Somehow, a temporal synchronicity binding mechanism must compute these processing time differences. (The problem of latency differences is exacerbated when we consider multimodal binding mechanisms, for example visual-auditory binding.)
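The computational burden can be made vivid with a toy example. The Python fragment below uses invented spike times, an assumed 15-millisecond latency difference, and a hypothetical helper function; it simply counts spike pairs falling inside a candidate binding window, with and without a latency correction, and is an illustration of the problem rather than a model anyone has proposed.

```python
import numpy as np

# Hypothetical spike times (ms) from neurons in two separated visual areas
# responding to the same stimulus, the second area lagging by roughly 15 ms.
spikes_area_a = np.array([43, 68, 91, 118, 140])
spikes_area_b = np.array([58, 83, 105, 133, 156])

def synchronous_pairs(a, b, window_ms, latency_correction_ms=0.0):
    """Count spikes in train a that have a partner in train b inside the window,
    optionally after compensating for a known latency difference."""
    b_shifted = b - latency_correction_ms
    return sum(np.any(np.abs(b_shifted - t) <= window_ms) for t in a)

# A narrow window with no correction finds no "bound" activity at all...
print(synchronous_pairs(spikes_area_a, spikes_area_b, window_ms=5))          # 0
# ...while compensating for the 15 ms processing-time difference recovers it.
print(synchronous_pairs(spikes_area_a, spikes_area_b, window_ms=5,
                        latency_correction_ms=15))                           # 5
```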

Such biological details suggest the need for neural regions where temporal information converges (to carry out the latency computations); but then temporal synchronicity solutions confront a problem similar to the one that sank purely spatial solutions: there is no solid evidence for such convergence sites. Temporal synchronicity solutions are less popular now. But the binding problem continues to attract philosophers' attention because of its obvious connections with consciousness and brain mechanisms. Rodolfo Llinás and Patricia Churchland's The Mind-Brain Continuum (1996) is a good edited volume, published while these debates about binding and temporal synchronicity were raging.

Molecular and Cellular Cognition

The reader might have noticed that most examples of neuroscientific work that has attracted philosophers' attention are dated. This is not necessarily a bad thing. Philosophical reflection on scientific results depends on their scientific credibility and that takes time to establish. However, this limitation risks missing important new developments and changing foundational assumptions in a rapidly developing science. The lessons philosophers draw might then be dated as well. There is evidence that "foundational" change has occurred recently in neuroscience, having to do with the increasing impact of molecular biology.

More than a decade ago neurobiologists Eric Kandel, James Schwartz, and Thomas Jessell, in the third edition of their textbook, Principles of Neural Science (1991), wrote that "the goal of neural science is to understand the mind: how we perceive, move, think, and remember. In the previous editions of this book, we stressed that important aspects of behavior could be explained at the level of individual nerve cells. Now it is possible to address these questions directly on the molecular level" (p. xii). With the publication of the text's fourth edition (2000), and after another decade of cellular and molecular investigations, these same authors announce mind-to-molecules "linkages" as accomplished scientific results:

This book describes how neural science is attempting to link molecules to mind: how proteins responsible for the activities of individual nerve cells are related to the complexity of neural processes. Today it is possible to link the molecular dynamics of individual nerve cells to representations of perceptual and motor acts in the brain and to relate these internal mechanisms to observable behavior. (p. 34)

These are heady claims, backed up by more than 1,400 pages of textbook evidence drawn from a huge scientific literature. Yet to read much philosophical discussion of neuroscience, one would not even know that this work and attitude exists, much less that it constitutes the current mainstream of the discipline. (This mountain of supporting evidence also refutes the pitying lament so often uttered by philosophers and cognitive scientists: "If we only knew more about how the brain works ..." We do.)

Much of this research is congealing around a field dubbed "molecular and cellular cognition." According to the Molecular and Cellular Cognition Society's Web site, the field's stated goal is to discover "explanations of cognitive processes that integrate molecular, cellular, and behavioral mechanisms, literally bridging genes and cognition." The field emerged in the early 1990s, after gene engineering techniques were introduced into mammalian neurobiology to generate knockout and transgenic rodents for behavioral studies. Memory has been a principal research focus, with an emphasis on consolidation (the transformation of labile, easily disrupted short-term memories into stable, enduring long-term forms) and on hippocampus-based memories that neuropsychologists call "declarative" or "explicit." This field's methodology is ruthlessly reductive. Its basic experimental strategy is to intervene into cellular or intracellular molecular pathways and then track their effects in the behaving animal using standard tests borrowed from experimental psychology for the phenomenon under investigation. (So despite the new molecular-genetic techniques for intervening directly at increasingly lower levels of biological processes, the basic experimental logic remains interestingly similar to that of classical lesioning and pharmacological studies.)

At last count, more than sixty molecules have been implicated in the molecular mechanisms of mammalian long-term potentiation (LTP), an activity-dependent form of synaptic plasticity with memorylike features. However, a few figure prominently and have been targets of bioengineered mutations and subsequent behavioral study in declarative memory consolidation tasks. Cyclic adenosine monophosphate (cAMP) is synthesized from adenosine triphosphate (ATP), the molecule whose conversion provides the energy that drives cellular metabolism and activity. cAMP is the classic "second messenger" of molecular biology, functioning as an intracellular signal for effects elsewhere in the cell. When available in high quantities in active neurons, it binds to the regulatory subunits of protein kinase A (PKA) molecules, freeing the catalytic PKA subunits. In high enough quantities, the latter translocate to the neuron's nucleus, where they phosphorylate cAMP response element binding proteins (CREB), a family of gene transcriptional enhancers and repressors that turn on or inhibit new gene expression and protein synthesis.

Specific targets of phosphorylated CREB transcriptional enhancers include genes coding for regulatory proteins that keep PKA molecules in their active state and effector proteins that resculpt the structure of active synapses, keeping those synapses potentiated to pre-synaptic activity for days to weeks. Numerous features of LTP have made it an attractive theoretical mechanism for memory consolidation for years; results from molecular and cellular cognition have finally lent experimental backing to this decades-old speculation.
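The logic of the cascade, and of the mutant phenotypes described next, can be caricatured in a few lines of code. The Python sketch below is a deliberately crude, threshold-style toy under invented assumptions (the function name, threshold, and parameters are all hypothetical); it encodes only the qualitative dependencies described above, not any quantitative model of LTP.

```python
def l_ltp_induced(camp_level, creb_functional=True, pka_regulatory_excess=False):
    """Toy test of whether the cAMP -> PKA -> CREB cascade reaches gene expression
    (late LTP). Units and the 1.0 threshold are arbitrary, for illustration only."""
    if camp_level < 1.0:            # too little second messenger: PKA never activated
        return False
    if pka_regulatory_excess:       # transgenic excess of regulatory subunits:
        return False                # catalytic subunits never reach the nucleus
    if not creb_functional:         # CREB mutants cannot drive new transcription
        return False
    return True                     # new proteins resculpt the synapse: L-LTP

# Wild type, a CREB mutant, and a PKA regulatory-subunit transgenic, respectively:
print(l_ltp_induced(2.0))                                # True
print(l_ltp_induced(2.0, creb_functional=False))         # False
print(l_ltp_induced(2.0, pka_regulatory_excess=True))    # False
```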

Alcino Silva's group has used mice with a targeted mutation of the CREB gene in a variety of short- and long-term memory tasks, including the Morris water maze, a fear conditioning task combining an environmental context with a conditioned stimulus, and a social recognition memory task. These mice do not synthesize the CREB molecules required for long-lasting "late" LTP (L-LTP), although they have all the molecules necessary for shorter-lasting "early" LTP (E-LTP). Eric Kandel's group has developed transgenic mice that overexpress PKA regulatory subunits in specific neural regions. When activity-driven cAMP molecules release PKA catalytic subunits, an abundance of regulatory subunits is available to block translocation of the catalytic subunits to the neuron's nucleus (in the regions of the brain where the transgene is expressed). This effect halts the gene expression and protein synthesis necessary for L-LTP. If the molecular mechanisms of L-LTP are those of memory consolidation, then Silva's CREB enhancer mutants and Kandel's PKA regulatory transgenics should be intact on short-term memory tasks but impaired on their long-term forms. These are exactly the published experimental results. Kandel's results are especially compelling because the transgenic mice acquire long-term memories on tasks that involve activity in brain regions where the transgene is not expressed, tasks they learn simultaneously with the long-term memory tasks on which they fail. This suggests that the deficit is not sensory, motor, or attentional, but instead is specific to memory consolidation.

New results from molecular and cellular cognition are reported in virtually every issue of journals such as Cell, Neuron, Journal of Neuroscience, Journal of Neurophysiology, and Nature Neuroscience. However, they have yet to creep into philosophical awareness. This is unfortunate for at least two reasons. First, this is mainstream neuroscience at the turn of the twenty-first century, employing techniques common to the bulk of the discipline's practitioners (especially compared to the number of cognitive neuroscientists). Second, this work is reductionistic, especially compared to higher-level neuroscience. Philosophers who limit their attention to the latter not only come away with a mistaken impression of what constitutes state-of-the-art neuroscience; they also miss the reductionist attitude that informs the mainstream. This carries problems especially for philosophy of mind. These implications are serious enough to motivate fuller discussion in the final section.

Philosophical Implications: Reduction Revisited

In presenting important neuroscientific findings above, this entry mentioned some of their philosophical implications. In this final section, implications for reductionism are discussed in more detail. Philosophical attention to neuroscience began with this concern. Reduction occupied an entire chapter in Patricia Churchland's groundbreaking Neurophilosophy (1986). Other concerns emerged as philosophers engaged neuroscience, but reduction remains central to neurophilosophy, as witnessed by its prominent treatment in the first single-authored, introductory neurophilosophy textbook (Churchland 2002). Unfortunately, the term "reduction" is less univocal than it once was, and its philosophical treatments and discussions remain frustratingly abstract and distant from actual scientific practice. These features cast suspicion on assessments of psychoneural reductionism's philosophical potential. Might closer attention to mainstream (cellular and molecular) neuroscience rectify this?

Philosophical discussions of reduction were clearest and most fruitful when intertheoretic reduction was their explicit concern. This treatment goes back most prominently to Ernest Nagel's classic The Structure of Science (1961, ch. 11). According to Nagel, reduction is deduction: the reduced theory, characterized syntactically as a set of propositions, is derived from the reducing theory, which serves as premises. In interesting scientific reductions the reduced theory contains descriptive terms that do not occur in the reducing theory, so the premises of the derivation must also contain bridging principles or correspondence rules. Typically these principles were treated as material biconditionals (although Nagel explicitly permitted material conditionals) containing terms from the two theoretical vocabularies. In interesting scientific cases, the reducing theory also often corrects the reduced one. On Nagel's account, this feature is handled by introducing premises expressing counterfactual limiting assumptions and boundary conditions on the application of the reducing theory.
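Schematically (a standard textbook rendering rather than Nagel's own notation), the account can be summarized as a derivation of the reduced theory from the reducing theory plus connecting and limiting premises:

```latex
% T_B : the reducing (base) theory
% BP  : bridge principles linking the two vocabularies, e.g. biconditionals
% CL  : counterfactual limiting assumptions and boundary conditions
% T_R : the reduced theory (or a corrected analog of it)
\[
  T_B \;\cup\; \underbrace{\{\, \forall x\, (P x \leftrightarrow Q x), \ldots \,\}}_{BP}
      \;\cup\; CL \;\;\vdash\;\; T_R
\]
```

Here P stands for a predicate drawn from the reduced theory's vocabulary and Q for one drawn from the reducing theory's; when the reducing theory corrects the reduced one, what is actually derived is an analog of the reduced theory rather than that theory itself.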

Both of these features came under serious philosophical criticism, much of it resulting from attempts by philosophers to apply Nagel's account to increasingly better described cases from the history of science (including the reduction of classical equilibrium thermodynamics to statistical mechanics and the kinetic theory of gases, Nagel's own detailed example). Led by Thomas Kuhn, Paul Feyerabend, Kenneth Schaffner, Lawrence Sklar, Robert Causey, and Clifford Hooker, philosophers of science proposed alternatives to Nagel's conditions. Patrick Suppes even proposed scrapping the entire syntactic view of theories and replacing it with a semantic view: theories as sets of models sharing set-theoretic or category-theoretic features. Intertheoretic reduction then turns into a mapping of these sets into one another in light of a variety of constraints and conditions.

One problem with applying these detailed accounts from the philosophy of science to philosophy of mind is that neither neuroscience nor psychology seems to provide robust enough theories. Most theories of intertheoretic reduction require a complete account of lower level phenomena in terms of laws, generalizations, or their model-theoretic counterparts. But even in the best cellular and molecular neuroscience, as in cell and molecular biology in general, few (if any) explanations are framed in terms of laws or generalizations. Many interactions are known to occur with predictable regularity and have both theoretical and experimental justification; but biochemistry hasn't even provided molecular biology with a general (and hence generalization-governed) account of how proteins assume their tertiary configurations. Molecular biologists know much about how specific molecules interact in specific contexts, but few explanatory generalizations are found in experimental reports, review articles, or textbooks; and the few that are found do not by themselves yield extensive predictions or explanations of lower level interactions. Finally, real molecular neuroscience does not provide what some law-based accounts of scientific theory structure require. Its explanations do not specify how molecular biological entities interact in all possible circumstances. In light of these mismatches, intertheoretic reduction looks like a naive account of actual scientific practice.

Furthermore, its philosophical successor, functional reductive explanation, fares no better. According to this view, whose prominent advocates include Jaegwon Kim, David Chalmers, and Joseph Levine, a reductive explanation of a higher-level phenomenon is a two-step process. Step 1 requires a functional characterization of the phenomenon, in terms of its principal causes and effects. Step 2 involves the empirical, scientific search for the lower level processes, events, or mechanisms that realize this functional characterization. The reductive explanation of water by aggregates of H2O molecules is a commonly cited example. Scientists characterize the causal roles of water and its basic properties, like its boiling point at sea level; and empirical research reveals that aggregates of H2O molecules, with their physical and chemical properties and dynamics, provide the underlying mechanisms for those causes and effects. (This account of reductive explanation is often employed by critics of mind-brain reductionism. Many philosophical champions of the qualitative features of consciousness insist that no reductive explanation of them should be expected, because any attempt to functionalize these features will fail to capture their qualitative essence. Hence Step 1 of their potential reductive explanation cannot be achieved.)

It is not illuminating (quite the reverse, in fact) to force the actual details of state-of-the-art "molecular and cellular cognition" into this format. No procedures that typically occur in these experiments are serious candidates for Step 1 functionalization. And the empirical searches for mechanisms typically focus on finding specific divergences from control group behavior in experimental protocols that are commonly used to study the cognitive phenomenon whose neurobiological reduction is at issue. The key step in these experiments is the intervention step, where techniques of cell and molecular biology are used to manipulate increasingly lower levels of biological organization in living, behaving organisms. Animals receiving the intervention, be it cellular, pharmacological, or a bioengineered mutation, are compared to control animals on a variety of behavioral tests to find specific, narrow behavioral deficits.

These experiments are designed to leave most behaviors intact, for only then do experimenters claim to have found a "reduction," an "explanation," or a "mechanism" of cognition. To force this experimental practice into the common philosophical model of functional reductive explanation occludes the subtlety of choosing which cellular or molecular pathways to intervene into, the exquisiteness of the intervention techniques employed, and the specificity of the measured behavioral effects when these experiments are successful. Good philosophical accounts of a scientific practice should illuminate, not obscure, these types of features-in-practice. Any consequences drawn about "psychoneural reduction" from an account that obscures them should be treated with suspicion.

This problem is beginning to look like one of imposing borrowed philosophical ideals onto actual scientific practice. Based on prior epistemological or metaphysical commitments, many philosophers approach the neuroscientific literature with preconceptions about "what reduction has to be." When they fail to find that relation obtaining, they either deny that psychoneural reduction is on offer or redescribe actual cases so that these at least approximate it. Both responses are objectionable. The first drives philosophy of mind continuously farther away from mainstream neuroscience, which grew increasingly reductionistic in the last two decades of the twentieth century. The second keeps borrowed philosophical ideals alive when their actual value grows increasingly questionable, and engenders criticisms of "reductionism" based on "better knowledge of the actual scientific details." A better approach within the philosophy of neuroscience might be to articulate the actual practices of reductionistic neuroscientists, the ones whose work contributes to the mind-to-molecules "linkages" announced in the passage from Kandel, Schwartz, and Jessell quoted above. The result will be an account of real reduction in real reductionist neuroscience. One could then ask the different question of whether these practices and their results serve the philosophical purposes that reductionism claimed to serve.

It is still too early in this metascientific investigation to know the answer to that last question. But careful examination of the experimental work described toward the end of the previous section shows that the dominant reductionistic methodology involves intervening into cellular or molecular processes and then tracking the behavioral effects in the living animal using standard tests drawn from experimental psychology. Often much in vitro experimental work must be done first to discover where these interventions are best placed and which intervention techniques are best suited for the task. Cellular physiology still contributes intervention techniques such as cortical microstimulation; pharmacology still contributes a variety of drugs and delivery systems. During the last decade of the twentieth century, transcranial magnetic stimulation matured, providing more precise techniques for delivering a circumscribed magnetic field to increasingly precise neuronal targets. And molecular biology and biotechnology provided powerful techniques for gene manipulation, enabling experimenters to develop targeted gene knockouts and to insert transgenes that inhibit or exacerbate specific protein synthesis. When a transgene is attached to appropriate promoter regions (base pair sequences in the genetic material that control the onset of gene expression), its expression and the subsequent protein synthesis can be limited to increasingly localized neuron populations.

Armed with these cellular and molecular intervention techniques, and coupled with detailed neuroanatomical knowledge about the cell circuits leading ultimately to motor neurons and the muscle fibers they innervate, neuroscientists can make increasingly accurate predictions of behavioral effects on a variety of experimental tasks. Successful experimental results yield the conclusion that the specific cognitive phenomenon, "operationalized" using the behavioral tests employed, reduces to the cellular or molecular processes intervened into, within the neurons comprising the circuits leading ultimately to the musculature. Appeals to "higher level" neuroscientific concepts and resources no longer appear in the resulting explanations. One reads in this scientific literature about contributions to "a molecular biology of cognition," to "bridges linking genes and behavior," and to explanations "of cognitive processes that integrate molecular, cellular and behavioral mechanisms." Within "molecular and cellular cognition," resources from cognitive neuroscience play essential heuristic roles. But once they have served their purposes in yielding new "intervene molecularly and track behaviorally" results, they fall away from the discipline's best available account of cognition's neural mechanisms. Philosophers (and many cognitive scientists) might not recognize these scientific practices and results, but that reaction reflects nothing more than their lack of familiarity with ongoing neuroscientific practice. This methodology is central to mainstream reductionistic neuroscience at the turn of the twenty-first century. If one wishes to rail against "psychoneural reductionism," one should at least rail against the actual practices and results of real reductionistic neuroscience, not against preconceived assumptions about what those practices and results "have to be."

This final point raises the intriguing question of whether neuroscience as a whole is univocal about the nature of reduction. More than likely it is not. Midway through the first decade of the twenty-first century, neuroscience is a remarkable interdisciplinary melding of different experimental techniques, methodological hunches, and interpretive assumptions. Molecular biology revolutionized the discipline in the late twentieth century, but so did new tools for functional brain imaging. Dynamical systems mathematics, applied initially to analyze artificial neural networks, provided fruitful new formal resources. Neuroscience's traditional core disciplines, neuroanatomy and electrophysiology, have enjoyed continual refinement. Rigorous neurological and neuropsychological assessment continues to develop. With so many questions being pursued (philosophers would do well to compare attendance at their annual professional meetings with the more than 30,000 registrants at the 2004 Society for Neuroscience annual meeting) and so many techniques pitched at so many different levels of brain organization, it would be astonishing if "reduction" meant the same thing across this discipline. Perhaps disagreements within philosophy about the neuroscientific plausibility of "psychoneural reduction" result more from philosophers latching onto different uses of this notion across neuroscience than from ignorance or mistaken analysis. Sorting through these notions and discovering which neuroscientific practices employ each is one way that philosophers could contribute to ongoing neuroscientific development, instead of serving as mere sideline spectators or "science journalists."

See also Kim, Jaegwon; Kuhn, Thomas; Memory; Mind-Body Problem; Nagel, Ernest; Philosophy of Biology; Philosophy of Mind; Reductionism in the Philosophy of Mind.

Bibliography

Churchland, Patricia. Brain-Wise: Studies in Neurophilosophy. Cambridge, MA: MIT Press, 2002.

Churchland, Patricia. Neurophilosophy: Toward a Unified Science of the Mind-Brain. Cambridge, MA: MIT Press, 1986.

Kandel, Eric R., James H. Schwartz, and Thomas M. Jessell, eds. Principles of Neural Science. 3rd ed. New York: McGraw-Hill, 1991.

Kandel, Eric R., James H. Schwartz, and Thomas M. Jessell, eds. Principles of Neural Science. 4th ed. New York: McGraw-Hill, 2000.

Kolb, Bryan, and Ian Q. Whishaw. Fundamentals of Human Neuropsychology. 5th ed. New York: Bedford, Freeman, Worth, 2003.

Kosslyn, Stephen M. Image and Brain: The Resolution of the Imagery Debate. Cambridge, MA: MIT Press, 1994.

Llinás, Rodolfo, and Patricia Churchland, eds. The Mind-Brain Continuum: Sensory Processes. Cambridge, MA: MIT Press, 1996.

Molecular and Cellular Cognition Society. Available from http://www.silvalab.com/approachesf0.htm.

Nagel, Ernest. The Structure of Science: Problems in the Logic of Scientific Explanation. New York: Harcourt, Brace and World, 1961.

Posner, Michael I., and Marcus E. Raichle. Images of Mind. New York: Scientific American Library, 1997.

Ramachandran, Vilayanur S., and Susan Blakeslee. Phantoms in the Brain: Probing the Mysteries of the Human Mind. New York: William Morrow, 1998.

Society for Neuroscience. Available from http://apu.sfn.org/.

Treisman, Anne. "The Binding Problem." Current Opinion in Neurobiology 6 (1996): 171-178.

Uttal, William R. The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain. Cambridge, MA: MIT Press, 2001.

Weiskrantz, Lawrence. Consciousness Lost and Found: A Neuropsychological Exploration. New York: Oxford University Press, 1998.

John Bickle (2005)

Neuroscience

The field of neuroscience reflects the interdisciplinary effort to understand the structure, function, physiology, biology, biochemistry, and pathology of the nervous system. From a social science perspective, however, neuroscience primarily refers to the study of the brain. Of interest is how the brain gives rise to learning, cognition, and behavior. The research of neuroscientists crosses many levels of analysis, ranging from molecular studies of genes to the study of social and ethical behavior. Within psychology, for example, behavioral neuroscientists use animal models to gain a better understanding of how genetic and brain processes influence behavior. Since the late 1980s, there has been a dramatic rise in the field of cognitive neuroscience, which combines cognitive psychology, neurology, and neuroscience to examine how brain activity gives rise to cognitive abilities (for example, memory, emotion, attention, language, consciousness).

Most recently, social neuroscience is an emerging field that uses the methods of neuroscience to understand how the brain processes social information. It involves scholars from widely diverse areas (for instance, social psychology, neuroscience, philosophy, anthropology, economics, sociology) working together and across levels of analysis to understand fundamental questions about human social nature. Social neuroscience merges evolutionary theory, experimental social cognition, and neuroscience to elucidate the neural mechanisms that support social behavior. From this perspective, just as there are dedicated brain mechanisms for breathing, walking, and talking, the brain has evolved specialized mechanisms for processing information about the social world, including the ability to know one's self, to know how one responds to another, and to regulate actions in order to coexist with other members of society. The problems that are studied by social neuroscience have been of central interest to social scientists for decades, but the methods and theories that are used reflect recent discoveries in neuroscience. Although in its infancy, there has been rapid progress in identifying the neural basis of many social behaviors (for reviews, see Adolphs 2003; Heatherton et al. 2004).

METHODS OF NEUROSCIENCE

The principles of how cells operate in the brain to influence behavior have been studied with great progress for more than a century, but it is only since the late 1980s that researchers have been able to study the working brain as it performs its vital mental functions. Brain activity is associated with changes in the flow of blood as it carries oxygen and nutrients to active brain regions. Brain imaging methods track this flow of blood to understand which areas of the brain are most active for a given task. Positron emission tomography (PET), the first imaging method developed, involves a computerized reconstruction of the brain's metabolic activity by using a relatively harmless radioactive substance that is injected into the bloodstream. A PET scanner detects this radiation and therefore can be used to map out brain activity in real time, which is a direct measure of blood flow. The use of radioactive substances, however, places an inherent limitation on the number of trials that can be conducted in a PET study, a potential limitation that is not present in functional magnetic resonance imaging (fMRI). Like PET, fMRI measures brain activity, but it is noninvasive (that is, nothing is injected into the bloodstream). The fMRI process employs a strong magnetic field to assess changes in the oxygen level of blood at particular sites after they have become active, which is an indirect measure of blood flow.

Another set of techniques for assessing brain activity involves measuring electrical activity in the brain using an electroencephalogram (EEG). As a measure of specific mental states, an EEG is limited because the recordings reflect all brain activity and therefore are too noisy to isolate specific responses to particular stimuli. A more powerful method involves averaging together many trials and measuring the brain activity evoked for the brief periods of time following the start of the trial, resulting in measurements known as event-related potentials (ERPs). ERPs have proven to be especially useful for assessing the time course of cognitive processes, such as which aspects of a stimulus are processed first. A related method, magnetoencephalography (MEG), measures magnetic fields produced by the electrical activity of the brain. Both EEG and MEG provide excellent temporal resolution (that is, timing), yet limited spatial resolution (that is, the precise location of the activation). Brain imaging methods, such as fMRI and PET, provide much better spatial resolution than EEG or MEG, but at the cost of temporal resolution (that is, blood flow changes occur over several seconds following brain activity).
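The averaging step behind an ERP is easy to sketch. The Python fragment below fabricates noisy single-trial recordings containing a small, consistent evoked response (all numbers are invented for the illustration) and shows how averaging time-locked trials pulls the response out of the background activity.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = np.arange(300)                          # samples after stimulus onset

# A small, fake evoked component centered near sample 100, buried in noise.
evoked = 2.0 * np.exp(-((samples - 100) ** 2) / (2 * 15 ** 2))
trials = rng.normal(0.0, 10.0, size=(200, 300)) + evoked   # 200 noisy trials

# A single trial is dominated by ongoing background activity...
print(f"single-trial peak-to-noise ratio: {evoked.max() / trials[0].std():.2f}")

# ...but averaging across time-locked trials cancels the noise, leaving the ERP.
erp = trials.mean(axis=0)
print(f"averaged ERP peaks near sample {erp.argmax()} (true component at 100)")
```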

APPLICATIONS OF NEUROSCIENCE TO THE SOCIAL SCIENCES

The use of brain imaging techniques has allowed scientists to discover a great deal about the neural correlates of mental activity. For instance, cognitive neuroscientists have used these methods to better understand which brain regions are involved in memory, attention, visual perception, language, and many other psychological processes. More recently, neuroscience methods have been used to study topics of interest across the social sciences, such as political attitudes and decision-making, moral judgments, cooperation and competition, behavioral economics, addiction, and social cognition. For example, social psychologists have long been interested in understanding whether stereotypes reflect automatic (unconscious) or controlled (conscious) processes. Thus, social neuroscientists have begun to examine how various brain regions respond when people are making judgments of other people from various racial groups or of people who possess various forms of stigma (such as status, class, disfigurement). A common finding is that people high in racial bias, compared with people low in racial bias, show greater activity in the amygdala (a brain region associated with fear-based emotional responding) when observing faces of another race (Eberhardt 2005).

Research within social neuroscience has often focused on the trade-off between primitive emotional responses and higher-level cognitive control. The latter reflects unique human capacities for self-reflection, understanding the minds of others, and engaging in self-control; each of these capacities ultimately depends on the intact functioning of the frontal lobes. It is likely that the methods of neuroscience will expand throughout the social sciences to address questions at a new level of analysis. As such, these methods can augment the traditional methods used to understand social behavior.

SEE ALSO Altruism; Depression, Psychological; Dopamine; Drugs of Abuse; Generosity/Selfishness; Hallucinogens; Happiness; Memory; Neuroeconomics; Semantic Memory; Sex and Mating

BIBLIOGRAPHY

Adolphs, Ralph. 2003. "Cognitive Neuroscience of Human Social Behaviour." Nature Reviews Neuroscience 4 (3): 165-178.

Eberhardt, Jennifer L. 2005. "Imaging Race." American Psychologist 60 (2): 181-190.

Heatherton, Todd F., C. Neil Macrae, and William M. Kelley. 2004. "A Social Brain Sciences Approach to Studying the Self." Current Directions in Psychological Science 13 (5): 190-193.

Todd F. Heatherton

Anne C. Krendl

Neuroscience

Neuroscience is the study of the nervous system and its components. Neuroscientists may examine the nervous systems of humans and higher animals as well as simple multicellular nervous systems, or investigate nervous phenomenon at the cellular, organelle, or molecular level.

Neuroscience principally originated with three European scientists working at the end of the nineteenth and the beginning of the twentieth century. Camillo Golgi, an Italian physician, perfected a vital laboratory technique that first allowed scientists to trace the workings of the nervous system. Golgi completed a medical degree at the University of Pavia in 1865, and then became a medical researcher there. He was interested in cells and tissues, and experimented with ways to stain cells so they could be seen. Researchers before him had prepared cells with organic dyes, but Golgi found that staining with silver salts gave much clearer results. He became fascinated with nerve tissue, and using his staining process, he was the first to see in fine detail how this tissue was organized. His method later made it possible to show that the fibers of nerve cells do not meet completely but are separated by a gap, now known as a synapse. He devoted his life to mapping the structure of the nervous system. Golgi's work was furthered by a Spanish medical researcher, Santiago Ramón y Cajal. Ramón y Cajal, working at the University of Zaragoza, first improved on Golgi's staining method, then used it to discover the connection between the gray matter in the brain and the spinal cord. He shared the Nobel Prize in medicine with Golgi in 1906.

Golgi and Ramón y Cajal established the anatomy of the nervous system. The English physiologist Charles Scott Sherrington is credited with founding modern neuroscience through his work on the functioning of the nervous system. In other words, he brought the science from describing what the nervous system was to showing how it worked. His research explored the brain's ability to sense position and equilibrium, and the reflex actions of muscles.

Many other researchers continued to explore the workings of the nervous system. As laboratory imaging techniques progressed, neuroscientists were able to look at nerve cells at the molecular level. This allowed scientists to map the growth of nerve cells and nerve networks, and to study how individual cells process, store, and recall information.

Working with living brains to explore nerve function was all but impossible until the late 1970s, when sophisticated brain imaging machines were first developed. Positron emission tomography (PET) revolutionized neuroscience by allowing scientists to produce pictures of a working brain. Since then, scientists and engineers have come up with even better brain imaging systems, such as functional magnetic resonance imaging (fMRI). Using fMRI, neuroscientists can detect increases in blood oxygenation during brain function, and this shows which areas of the brain are most active. Brain activity occurs very quickly (neurons can respond to a stimulus within 10 milliseconds), and very sophisticated equipment is needed to capture such fleeting events. So-called neuroimaging is one of the hottest fields in neuroscience, as neurologists and technicians work together to find new ways of recording nerve action. Researchers in the late 1990s explored ways to map the flux of sodium ions within the brain, giving a direct record of neural activity, and to measure the scattering of light by brain tissues with fiber optics. Both techniques promise to give a more precise picture of which areas of the brain become active when a person thinks.

Neuroscience

In the twentieth century, neuroscience was greatly advanced by Wilder Penfield, who stimulated nerves in the brains of conscious patients to obtain a rudimentary functional map of the brain.

In the 1970s, it became possible to probe brain nerve function without the need for surgery. Sophisticated brain imaging techniques such as positron emission tomography (PET) revolutionized neuroscience by allowing scientists to produce pictures of a working brain. Since then, scientists and engineers have come up with even better brain imaging systems, such as functional magnetic resonance imaging (fMRI). Using fMRI, neuroscientists can detect increases in blood oxygenation during brain function, and this shows which areas of the brain are most active. Brain activity occurs very quickly (neurons can respond to a stimulus within 10 milliseconds), and very sophisticated equipment is needed to capture such fleeting events. Using neuroimaging, the flux of sodium ions within the brain can be measured, giving a direct record of neural activity. As well, the scattering of light by brain tissues can be measured with fiber optics. Both techniques promise to give a more precise picture of which areas of the brain become active when a person thinks.

neuroscience

neuroscience is the study of all aspects of nerves and the nervous system, in health and in disease. It includes the anatomy, physiology, chemistry, pharmacology, and pathology of nerve cells; the behavioural and psychological features that depend on the function of the nervous system; and the clinical disciplines that deal with them, such as neurology, neurosurgery, and psychiatry.

See nervous system.
