Technology and Ethics



If the concept of technology includes human arts and crafts, generally, not simply the science-led high technology of modern times, then the influence of technology precedes the dawn of history itself. This entry assumes the more inclusive sense, taking implemented intelligent practical purpose as key to the subject, thus binding both traditional and high technologies into a common domain for ethical assessment.

Ethical assessment itself tends to divide into two great approaches. One tradition looks primarily to the consequences of what is being evaluated. Is an action or policy (or habit or trait of character, etc.) likely to produce good results? If so, on this tradition, the action is ethically right, morally to be approved, because of its consequences. The other tradition focuses primarily on the type of action or policy under consideration, whether it conforms to a rule that defines what is right. If so, the action reflects what is morally to be approved, regardless of its consequences.

It is clear that proponents of these approaches can and often do argue past one another. The first position, here called outcome-ethics (also often called teleological or consequentialist ethics), may declare that policy P does no good, while the second position, here called rule-ethics (also often called deontological ethics), may insist that policy P flows inescapably from accepted rule R. Both may be correct in what they hold. But if they come to opposing views on the ethical wrongness or rightness of P, they have missed each other's point. Rule-ethics is not interested in outcomes but in the principle of the thing; outcome-ethics is impatient with abstract principles when concrete helps and harms are at stake.


Reconciling ethical methods

Ethical assessment of technology is made still more difficult because of tensions within the approaches themselves. Outcome-ethics is based on maximizing good, but differences abound on defining this key term. Pleasure, honor, well-functioning, and so on, are all possible candidates, but different definitions would call for different policies and would cast different ethical light on the technological means for achieving them. Defining the good in terms of honor, for example, might give a positive ethical assessment to the erection of catapults and the casting of cannon, while defining it in terms of pleasure might call for a more negative stance toward the implements of war.

Another difficulty for outcome-ethics, however the good is defined, is in determining when the ethically relevant outcome has come out. Events roll on, and a positive situation (e.g., avoiding the pains of battle) may be supplanted by a negative one (e.g., falling under an oppressive conqueror), which in turn leads endlessly to others. The openness of the future seems to make an ethical verdict on any outcome only provisional.

If the future is a problem for outcome-ethics, so also is the past. Taken literally, the measuring of ethical worth by future outcomes alone seems to leave the past without ethical significance. A promise once made would need continual reevaluation in light of changing future probabilities. Destructive acts in the past should be punished, if at all, only by reference to future good to be achieved; good deeds, once done, should be rewarded, if at all, only by looking toward future results.

These counterintuitive consequences are escaped by rule-ethics, which does not need a prior concept of good for its concept of right, does not make its ethical judgments hostage to a receding future, and is not required to ignore ethical obligations from the past. But there are analogous, equally deep problems for rule-ethics, if taken alone. First, there are many disagreements within this approach as to which rules should rule. Even excluding, in this entry, many conflicting claims of divine commands, profound disagreements may be expected on the source and authority of proffered ethical principles. Do they arise from an innate intuition? From societal enculturation? From a rational imperative? How much weight should these principles, given their sources, command? How general or specific should ethical rules be? The more they are detailed and specific, the more particular circumstances (even outcomes) dominate the rules; the more they are general and abstract, the more ethics loses touch with the concrete particularities of life. Rule-ethics gains much of its power from its principled distance from particular circumstances, but such distance makes it vulnerable to the temptations of fanaticism.

Somehow the clashing approaches to ethical assessment need to be reconciled if past technological decisions are to be adequately evaluated and future policies properly assessed. Technological implements are means to practical purposes. Since means are always aimed at ends, consequences must count in technological ethics. And since purposes can be formulated in terms of general motives, norms must also be applicable to technology.

A balance might be struck by acknowledging that concrete outcomes are the matter of ethical concern, while general rules constitute its form. Outcome-ethics could recognize that among alternative outcomes, some might be deeply unfair in their distribution of the good, and these would be worse outcomes than more equitable ones. But fairness is not simply one more addition to the good; it is a principle or rule about how the good should be spread. Cost-benefit analyses of technological outcomes are weak if they ignore the question of who bears the costs and who enjoys the benefits, and whether these are justly proportioned. Further, outcome-ethics needs to consider the rights and wrongs of past technological decisions, even if nothing can be done about them any longer. Recognizing mistakes in the past and formulating guidelines to help avoid similar mistakes in the future is an important ethical activity utilizing norms and principles, not just predictions. In these ways outcome-ethics (in order to do its own chosen job well) needs to learn from rule-ethics.
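To make the point about distribution concrete, here is a minimal sketch of the kind of tally an outcome-ethicist might run; the policy, group names, and numbers are purely hypothetical illustrations, not drawn from this entry:

```python
# Illustrative only: a cost-benefit tally that reports how costs and benefits
# are distributed across groups, rather than reporting the aggregate alone.
# All names and figures are hypothetical.

from dataclasses import dataclass

@dataclass
class GroupImpact:
    group: str
    benefit: float  # in arbitrary units of "good"
    cost: float

def assess(policy: str, impacts: list[GroupImpact]) -> None:
    net = sum(i.benefit - i.cost for i in impacts)
    print(f"{policy}: aggregate net benefit = {net:+.1f}")
    for i in impacts:
        print(f"  {i.group:<18} net {i.benefit - i.cost:+.1f} "
              f"(benefit {i.benefit}, cost {i.cost})")

# A policy can look good in aggregate while loading its costs onto one group.
assess("hypothetical dam project", [
    GroupImpact("urban consumers", benefit=90.0, cost=10.0),
    GroupImpact("displaced farmers", benefit=5.0, cost=60.0),
])
```

The aggregate figure alone would approve the project; only the per-group breakdown raises the fairness question that, on the account above, outcome-ethics must borrow from rule-ethics.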

Reciprocally, rule-ethics needs to learn from outcome-ethics if it is to remain relevant to the fears and hopes that drive technological activity. Consequences do matter ethically to real people. Rules must not be allowed to blind moral concern to concrete pains. Rules need to be responsive. This is especially obvious in the context of high technology, where possibilities of doing things become practical for the first time. When entirely new types of doing are contemplated, existing rule-books may not be adequate for guidance. This does not mean that rules are not relevant. But rules need to be extended, amended, and reviewed in light of novel facts and unprecedented possibilities. Modern technology, with its radical novelties, makes this extension of traditional ethics (both outcome- and rule-ethics) vital.


Examining historical cases

Over the course of human history, the outcomes sought by technological implements reflect every kind of practical good (real, imagined, or perverse) that human beings are capable of craving. Food, shelter, the death of enemies, the docility of slaves, accurate records (a list without end) have been sufficiently valued that intelligence has been put to work creating artifacts to secure them. For one grisly example, some medieval cities in Europe maintain so-called police museums displaying the technologies of punishment once meted out to malefactors. Cleverly devised implements of torture, including metal seats for roasting, iron claws for tearing, and racks for dislocating, were the embodiment of purposeful design in quest of something taken by many in that society as a public good. We may shudder today at these artifacts and question whether those goals of inflicting extreme pain were really good, or whether the larger good of public order really required such measures, just as it is possible to shudder and ask the same questions about the practical intelligence and values embodied in our publicly approved electric chairs, gas chambers, and paraphernalia of lethal injection.

Here we encounter the appropriate critical task of technological ethics. Using the methods of outcome-ethics, one needs to examine whether the consequences sought can really be approved as good over the longest anticipated time horizon, and if so, whether in fact the means proposed are the best ones for achieving these critically examined results. At the same time, using the methods of rule-ethics, one must ask whether the principle of fairness is being served in distributing the various goods and ills concerned, whether the type of action contemplated falls under clearly stated and approved principles, whether these specific principles can be further justified by a hierarchical order of still more general norms, and whether this more comprehensive set of interlocking norms itself is clear, consistent, adequate to the larger circumstances, and coherently defensible to a thoughtful, unbiased judge.

A famous rejection of industrial technology occurred in the early nineteenth century in northern England, when the Luddites, followers of a (possibly mythical) Ned Ludd, purportedly a home weaver displaced by new factory-based machines, smashed the power looms that threatened their ways of life. It is likely that this direct action was motivated more by economic than ethical values, and it was put down by gunfire and hangings, but it raises many ethical issues. What were the ethically relevant consequences of the shift from home industry to the factory system? One consequence was a greatly increased volume of production, a prima facie good. Another was the replacement of a society of small producers, owners of their own looms, with a laboring class required to sell its services to others who owned the means of production. This outcome is prima facie negative, involving a decrease of dignity, a loss of cohesion in family life, and a corresponding increase in alienation and insecurity. The factory system, and eventually the assembly line, produced mixed consequences. Ethical examination needs to sort these out and weigh them. In terms of principle, as well, there are profound issues of involuntary social change forced by technological efficiencies. To what extent should the autonomy of persons to choose their basic conditions of life be honored above the promise of greater economic productivity? On whom will the burdens fall when technology uproots life? Will those who bear these burdens receive a fair share of the new rewards, or will these flow disproportionately to others? Should society provide institutional opportunities for all the people involved to discuss and decide these ethically vital questions? Can any society that fails to do so consider itself genuinely democratic?

These questions reveal a serious general problem in technological ethics: the arrival of many revolutionary changes as faits accomplis. Well before the appearance of high technologies, simple trial-and-error discoveries deeply altered valued conditions of life before they could be prevented or even discussed. Alfred Nobel (1833–1896) was keenly aware of how much his invention of dynamite would shake the world. The invention itself, in 1867, was wholly in the craft tradition, a chance discovery that nitroglycerine could be absorbed by a certain porous siliceous earth and thus be made much safer to use. Various types of dynamite were used in blasting tunnels and mines, as well as in cutting canals and building railbeds and roads. The consequences of these applications deserve analysis as ethically quite mixed, socially and environmentally, but of course the most spectacular use of the high explosives stemming from Nobel's invention was in war. Nobel himself established his prizes, including the Peace Prize, to coax the world toward better outcomes. He even dared to hope that the power of dynamite would make future wars unthinkable. In this he was sadly mistaken.


Assessing contemporary challenges

The leap from chemical high explosives to high nuclear technology may on the surface seem short, but in fact it represents a qualitative change. The high explosives of the nineteenth century were grounded in the same tradition of craft advancement that had characterized human technique from prehistoric times. A lucky empirical discovery was noted, remembered, repeated, applied, extended, and exploited: a paradigm instance of excellent practical reasoning. The atom bomb, in contrast, had to await a spectacular achievement in theoretical reasoning about nature even to be conceived. Specifically, a revolutionary change in understanding the relationship between matter and energy, wrought in the mathematical imagination of Albert Einstein (1879–1955) and stated in his famous energy-mass equation, E = mc², was a necessary condition for even recognizing the phenomenon of nuclear fission energy release when it occurred in German laboratories in 1938, and certainly also for seeking fission energy as a practical goal. Einstein himself was skeptical of this practical possibility when first alerted to it in 1939 by Niels Bohr (1885–1962), but he was soon convinced by further experiments conducted immediately for him at Columbia University. Later in the same year, Einstein signed a letter to President Franklin D. Roosevelt alerting him to the danger of allowing German scientists to be first in unlocking the huge energies predicted by his theory. From this warning sprang the Manhattan Project, at that date the largest science-led technological project ever launched. The ethical ambiguities of the atom bomb, its use in the war against Japan and its role in deterring a third world war in the twentieth century, have been much discussed. Conflicting estimates of the consequences for good or ill, and conflicting identifications of the relevant ethical principles involved, are well known. Although of a new type, as offspring of theoretical intelligence, and of new scales in magnitude and urgency, nuclear bomb-making is subject to all the old ethical concerns.
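A rough arithmetic illustration of the scale implied by Einstein's equation may be helpful (a minimal sketch; the one-gram figure and the TNT conversion are standard textbook approximations, not values given in this entry):

```python
# Rough illustration of E = mc^2: the energy released if one gram of mass
# were converted entirely into energy. Figures are textbook approximations.

c = 2.998e8                    # speed of light in metres per second
m = 1.0e-3                     # one gram, expressed in kilograms

energy_joules = m * c ** 2     # E = m c^2  ->  about 9 x 10^13 joules
tnt_joules_per_ton = 4.184e9   # roughly 4.184 GJ released per ton of TNT

print(f"Energy from 1 g of mass: {energy_joules:.2e} J")
print(f"Roughly {energy_joules / tnt_joules_per_ton:,.0f} tons of TNT equivalent")
```

That single gram corresponds to on the order of twenty kilotons of TNT, roughly the scale of the first fission bombs, which is why so small an amount of matter could carry such enormous practical and ethical weight.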

What adds a special challenge for ethical assessment after the rise of theory-led technology is a new responsibility of assessing major technological innovations after they are conceived in principle but before they are born in practice. Technology policy can be ethically deliberated. Two examples will serve to illustrate.

Shifting from nuclear fission to fusion, we may assess the still-unrealized technology of electrical energy production by controlled thermonuclear reaction. In 1939, the hitherto mysterious source of the sun's prodigious energy output began to be understood theoretically as coming from a process in which four hydrogen nuclei are joined, when enormously high pressures and temperatures overcome their electrical charge repulsion, to form one helium nucleus. This source is quite different in principle from the nuclear energies released when a heavy nucleus, such as the isotope uranium-235, splits into lighter nuclei. The two distinct processes are spectacularly combined in thermonuclear (so-called hydrogen) bombs, when the enormous but uncontrolled heat and pressure of a fission reaction form the momentary star-like environment in which the heavy hydrogen isotopes deuterium and tritium are forced to fuse into helium.
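The contrast between the two processes can be summarized in the standard reaction equations (a sketch using conventional textbook values, which are not given in this entry):

$$
{}^{2}_{1}\mathrm{D} + {}^{3}_{1}\mathrm{T} \;\rightarrow\; {}^{4}_{2}\mathrm{He} + {}^{1}_{0}n + 17.6\ \mathrm{MeV}
$$

$$
{}^{235}_{92}\mathrm{U} + {}^{1}_{0}n \;\rightarrow\; \text{fission fragments} + 2\text{ or }3\,{}^{1}_{0}n + \approx 200\ \mathrm{MeV}
$$

Although a single fission event releases far more energy than a single fusion event, the fusion reaction releases several times more energy per nucleon of fuel, which is part of its theoretical lure.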

The theoretical lure of creating useful electrical energy from a controlled fusion process is strong. The fuel, primarily deuterium, is plentiful, widely distributed, and relatively cheap. Every eight gallons of ordinary water contains about one gram of deuterium, which in principle could provide as much energy as 2,500 gallons of gasoline. There is no radioactive waste to guard or dispose of. The practical difficulties, however, are extreme. The main technical problem is containing the unimaginably hot plasma of nuclei so tightly that a sustained reaction can occur. No material container could be used without instant vaporization. Strong magnets need to hold the writhing plasma away from all objects while a net surplus of energy is somehow extracted. Intense efforts have been under way for decades; perhaps someday the theoretical possibilities will be actualized.
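A back-of-the-envelope check of the deuterium figures above can make them vivid (a sketch only; the abundance, burn-up energy, and gasoline figures are commonly cited approximations assumed here, not values stated in this entry):

```python
# Back-of-the-envelope check: deuterium content of eight gallons of water and
# its rough energy equivalent in gallons of gasoline. All constants are
# commonly cited approximations, assumed here for illustration.

GALLON_LITRES = 3.785
H_MASS_FRACTION_OF_WATER = 2.016 / 18.015   # hydrogen's share of water by mass
D_PER_H_BY_NUMBER = 1.56e-4                 # about 1 deuterium atom per 6,400 H atoms
D_BURN_J_PER_KG = 3.45e14                   # approx. energy of a complete D-D burn
GASOLINE_J_PER_GALLON = 1.3e8               # about 130 MJ per gallon of gasoline

water_kg = 8 * GALLON_LITRES                # 1 litre of water is about 1 kg
hydrogen_kg = water_kg * H_MASS_FRACTION_OF_WATER
deuterium_kg = hydrogen_kg * D_PER_H_BY_NUMBER * (2.014 / 1.008)  # number -> mass ratio

energy_joules = deuterium_kg * D_BURN_J_PER_KG
print(f"Deuterium in 8 gallons of water: about {deuterium_kg * 1000:.1f} g")
print(f"Gasoline equivalent: roughly {energy_joules / GASOLINE_J_PER_GALLON:,.0f} gallons")
```

On these assumptions the result lands near the figures quoted above: roughly one gram of deuterium per eight gallons of water, with an energy content in the low thousands of gallons of gasoline.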

But should fusion energy be practically realized? Ethical questions remain open for debate. Many positive outcomes are promised. Human society might be freed from dependence on oil, natural gas, and coal, with positive economic and environmental consequences. The rule of fairness in distribution of the fuel itself is better met, since water is a more widely available resource than oil or coal. Distribution of devices for deuterium extraction and of expensive fusion reactors would of course need scrutiny for fairness. One seldom-considered question is whether human beings, in principle, should be freed of all need to deliberate over and choose among energy expenditures. Has our species earned the right to be trusted with the capacity to pave over the world? This worrisome question forces attention again to the complexity of the long-term consequences that could reasonably be expected. The ethical debates have hardly begun.

The ethical debates over our second example, the technology of cloning, exploded into public consciousness with the appearance of Dolly, a cloned sheep, in 1997. Significantly, this is a technology led by theoretical biology, not to be confused with the techniques of selective breeding, which are as old as agriculture itself. Cloning technology is made possible by the revolution in understanding organic life brought about by the science of molecular biology, and especially by DNA analysis in genetics. Dolly's type of cloning, long believed to be impossible, depends on replacing the nuclear DNA in an egg cell with the nuclear DNA from an adult somatic cell of another organism. The donor cells are made quiescent by starvation, after which the donated DNA from those cells is fused into the host egg cells by electrical pulses, and the activated eggs, after a short period of in vitro development, are implanted into a womb.

Ethical assessment of various types of cloning in agricultural application, where the production of sheep, cattle, and pigs is concerned, is likely to dwell on outcomes more than rules, though there are significant voices calling for a moratorium or prohibition, in principle, against so-called Frankenfoods, because of their unnatural origin, or perhaps because of offense taken at the possibility of transgenic manipulation of genetic characteristics. Ethical consideration of consequences will point to the increased good of more and better quality food in a hungry world, while opponents will urge the possible dangers to health, both of consumers and of over-manipulated organisms designed too narrowly by genetic engineers focused exclusively on the dinner table. A great deal more information is needed on these hopes and fears. Meanwhile, the principle of informed consent may be important in the marketing of artificial life-forms, so that consumers are given full information about what they buy and eat.

Still more intense passions rise in ethical debates on the possible cloning of human beings. Here appeals to rules tend to come first, though ethical concerns about consequences are also important. Aside from religious objections, ethical principles concerned with the uniqueness and dignity of human individuals may be invoked. Certainly, in principle, no human person should be cloned merely to serve as an organ bank, to provide rejection-free transplants for an ailing heart, for example. But might cloning be allowed from a dying child's tissues to alleviate an aching heart, if this could provide a DNA-identical replacement to nurture and love? Although all might agree with the rule that no person (including clones) should be treated as nothing but a means, might there be legitimate mixed situations, where a clone could be valued primarily as an end but also to some degree as a means?

Factual outcomes need close attention here as well. If the motive is to produce mere replicas of specific persons (musicians, athletes, soldiers, scientists, perished loved ones, etc.), this may be both objectionable in principle and unachievable as an outcome. Cloning will never be able to replicate persons exactly. Persons, within general genetic limits, are partially self-creating beings. Monozygotic twins (or triplets, etc.) are not really identical persons, despite shared DNA and largely similar in utero and childhood conditions. Much greater differences of environmental conditions, in the womb and throughout life, will assure that even the identical DNA shared by donor and clone will not violate the latter's uniqueness of personhood. Ethical evaluation of this alluring and horrifying possible technology, like that of many other technologies still aborning, needs to become more subtle in analyzing principles and anticipating outcomes.


See also Cloning; Information Technology; Biotechnology; Reproductive Technology; Technology; Technology and Religion



Frederick Ferré
