Quantum Mechanics


Quantum mechanics has the distinction of being considered both the most empirically successful and the most poorly understood theory in the history of physics.

To take an oft-cited example of the first point: The theoretically calculated value of the anomalous magnetic moment of the electron using quantum electrodynamics matches the observed value to twelve decimal places, arguably the best confirmed empirical prediction ever made. To illustrate the second point, we have the equally oft-cited remarks of Niels Bohr, "Anyone who says that they can contemplate quantum mechanics without becoming dizzy has not understood the concept in the least," and of Richard Feynman, "[We] have always had (secret, secret, close the doors!) we always have had a great deal of difficulty in understanding the world view that quantum mechanics represents." How could both of these circumstances obtain?

For the purposes of making predictions, quantum theory consists in a mathematical apparatus and has clear enough rules of thumb about how to apply the mathematical apparatus in various experimental situations. If one is doing an experiment or observing something, one must first associate a mathematical quantum state or wave function with the system under observation. For example, if one prepares in the laboratory an electron beam with a fixed momentum, then the quantum state of each electron in the beam will be something like a sine wave. In the case of a single particle it is common to visualize this wave function as one would a water wave: as an object extended in space. Although this visualization works for a single particle, it does not work in general, so care must be taken. But for the moment, this simple visualization works. The wave function for the electron is "spread out" in space.

The second part of the mathematical apparatus is a dynamical equation that specifies how the quantum state changes with time so long as no observation or measurement is made on the system. These equations have names like the Schrödinger equation (for nonrelativistic quantum mechanics) and the Dirac equation (for relativistic quantum field theory). In the case of the electron mentioned earlier the dynamical equation is relevantly similar to the dynamical equation for water waves, so we can visualize the quantum state as a little plane water wave moving in a certain direction. If the electron is shot at a screen with two slits in it, then the quantum state will behave similarly to a water wave that hits such a barrier: circularly expanding waves will emerge from each slit, and there will be constructive and destructive interference where those waves overlap. If beyond the slits there is a fluorescent screen, we can easily calculate what the quantum state "at the screen" will look like: It will have the peaks and troughs characteristic of interfering water waves.

Finally comes the interaction with the screen. Here is where things get tricky. One would naively expect that the correct way to understand what happens when the electron wave function reaches the screen is to build a physical model of the screen and apply quantum mechanics to it. But that is not what is done. Instead, the screen is treated as a measuring device and the interaction with the screen as a measurement, and new rules are brought into play.

The new rules require that one first decide what property the measuring device measures. In the case of a fixed screen it is taken that the screen measures the position of a particle. If instead of a fixed screen we had an absorber on springs, whose recoil is recorded, then the device would measure the momentum of the particle. These determinations are typically made by relying on classical judgments: There is no algorithm for determining what a generic (physically specified) object "measures," or indeed whether it measures anything at all. But laboratory apparatus for measuring position and momentum have been familiar from before the advent of quantum theory, so this poses no real practical problem.

Next, the property measured gets associated with a mathematical object called a Hermitian operator. Again, there is no algorithm for this, but for familiar classical properties like position and momentum the association is established. For each Hermitian operator there is an associated set of wave functions called the eigenstates of the operator. It is purely a matter of mathematics to determine the eigenstates. Each eigenstate has associated with it an eigenvalue: The eigenvalues are supposed to correspond to the possible outcomes of a measurement of the associated property, such as the possible values of position, momentum, or energy. (Conversely, it is typically assumed that to every Hermitian operator there corresponds a measurable property and possible laboratory operations that would measure it, although there is no general method for specifying these.)

The last step in the recipe for making predictions can now be taken. When a system is measured, the wave function for the system is first expressed as a sum of terms, each term being an eigenstate of the relevant Hermitian operator. Any wave function can be expressed as a sum of such terms, with each term given a weight, which is a complex number. For example, if an operator has only two eigenstates, call them |1> and |2>, then any wave function can be expressed in the form α|1> + β|2>, with α and β complex numbers such that |α|² + |β|² = 1. (This is the case, for example, when we measure the so-called spin of an electron in a given direction, and always get one of two results: spin up or spin down.) Recall that each eigenstate is associated with a possible outcome of the measurement: |1>, for example, could be associated with getting spin up, and |2> with getting spin down. The quantum mechanical prediction is now typically a probabilistic one: the chance of getting the result associated with |1> is |α|², and the chance of getting the result associated with |2> is |β|². In general, one writes out the wave function of the system in terms of the appropriate eigenstates, and then the chance of getting the result associated with some eigenstate is just the squared magnitude of the complex number that weights the state.
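
This recipe is mechanical enough to sketch in a few lines of code. The following is a minimal illustration in Python with NumPy, not anything from the original text: the amplitudes alpha and beta are arbitrary illustrative choices, and the two eigenstates are represented as basis vectors.

```python
import numpy as np

# Eigenstates |1> and |2> of the measured operator, as basis vectors.
one = np.array([1, 0], dtype=complex)
two = np.array([0, 1], dtype=complex)

# An arbitrary normalized state alpha|1> + beta|2> (illustrative values).
alpha, beta = 0.6, 0.8j
psi = alpha * one + beta * two
assert np.isclose(abs(alpha)**2 + abs(beta)**2, 1.0)

# The chance of each outcome is the squared magnitude of the weight
# on the corresponding eigenstate.
p_one = abs(np.vdot(one, psi))**2   # 0.36
p_two = abs(np.vdot(two, psi))**2   # 0.64
print(p_one, p_two)
```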

We can now see how quantum theory makes empirical predictions: So long as one knows the initial quantum state of the system and the right Hermitian operator to associate with the measurement, the theory will allow one to make probabilistic predictions for the outcome. Those predictions turn out to be exquisitely accurate.

If a Hermitian operator has only a finite number of eigenstates, or the eigenvalues of the operator are discrete, then any associated measurement should have only a discrete set of possible outcomes. We have already seen this in the case of spin; for a spin-1/2 particle such as an electron, there are only two eigenstates for the spin in a given direction. Physically, this means that when we do an experiment to measure spin (which may involve shooting a particle through an inhomogeneous magnetic field) we will get only one of two results: Either the particle will be deflected up a given amount or down a given amount (hence spin up and spin down). In this case the physical quantity is quantized; it takes only a discrete set of values. But quantum theory does not require all physical magnitudes to be quantized in this way; the position, momentum, or energy of a free particle is not. So the heart of quantum theory is not a theory of discreteness; it is rather just the mathematical apparatus and the rules of application described earlier.

The Measurement Problem

Why, then, is the quantum theory so puzzling, or so much more obscure than, say, classical mechanics? One way that it differs from classical theory is that it provides only probabilistic predictions for experiments, and one might well wonder, as Albert Einstein famously did, whether this is because "God plays dice with the universe" (i.e., the physical world itself is not deterministic) or whether the probabilities merely reflect our incomplete knowledge of the physical situation. But even apart from the probabilities, the formulation of the theory is rather peculiar. Rules are given for representing the physical state of a system and for how that physical state evolves and interacts with other systems when no measurement takes place. This evolution is perfectly deterministic. A different set of rules is applied to derive predictions for the outcomes of experiments, and these rules are not deterministic. Still, an experiment in a laboratory is just a species of physical interaction, and ought to be treatable as such. There should be a way to describe the physical situation in the lab, and the interaction of the measured system with the measuring device, that relies only on applying, say, the Schrödinger equation to the physical state of the system plus the lab.

John S. Bell put this point succinctly, "If you make axioms, rather than definitions and theorems, about the 'measurement' of anything else, then you commit redundancy and risk inconsistency" (1987, p. 166). You commit redundancy because while the axioms about measurement specify what should happen in a measurement situation, the measurement situation, considered as a simple physical interaction, ought also to be covered by the general theory of such interactions. You risk inconsistency because the redundancy produces the possibility that the measurement axioms will contradict the results of the second sort of treatment. This is indeed what happens in the standard approaches to quantum mechanics. The result is called the measurement problem.

The measurement problem arises from a conflict in the standard approach between treating a laboratory operation as a normal physical interaction and treating it as a measurement. To display this conflict, we need some way to represent the laboratory apparatus as a physical device and the interaction between the device and the system as a physical interaction. Now this might seem to be a daunting task; a piece of laboratory apparatus is typically large and complicated, comprising astronomically large numbers of atoms. By contrast, exact wave functions are hard to come by for anything much more complicated than a single hydrogen atom. How can we hope to treat the laboratory operation at a fundamental level?

Fortunately, there is a way around this problem. Although we cannot write down, in detail, the physical state of a large piece of apparatus, there are conditions that we must assume if we are to regard the apparatus as a good measuring device. There are necessary conditions for being a good measuring device, and since we do regard certain apparatus as such devices, we must be assuming that they meet these conditions.

Take the case of spin. If we choose a direction in space, call it the x direction, then there is a Hermitian operator that gets associated with the quantity x spin. That operator has two eigenstates, which we can represent as |x up>S and |x down>S. The subscript S indicates that these are states of the system to be measured. We have pieces of laboratory equipment that can be regarded as good devices for measuring the x spin of a particle. We can prepare such an apparatus in a state, call it the "ready" state, in which it will function as a good measuring device. Again, we do not know the exact physical details of this ready state, but we must assume such states exist and can be prepared. What physical characteristics must such a ready state have?

Besides the ready state, the apparatus must have two distinct indicator states, one of which corresponds to getting an "up" result of the measurement and the other of which corresponds to getting a "down" result. And the key point about the physics of the apparatus is this: It must be that if the device in its ready state interacts with a particle in the state |x up>S, it will evolve into the indicator state that is associated with the up result, and if it interacts with a particle in state |x down>S, it will evolve into the other indicator state.

This can be put in a formal notation. The ready state of the apparatus can be represented by |ready>A, the up indicator state by |"up">A, and the down indicator state by |"down">A. If we feed an x spin up particle into the device, the initial physical state of the system plus apparatus is represented by |x up>S|ready>A, if we feed in an x spin down particle the initial state is |x down>S|ready>A. If the apparatus is, in fact, a good x spin measuring device, then the first initial state must evolve into a state in which the apparatus indicates up, that is, it must evolve into |x up>S|"up">A, and the second initial state must evolve into a state that indicates down, that is, |x down>S|"down">A. Using an arrow to represent the relevant time evolution, then, we have for any good x spin measuring device
|x up>S|ready>A → |x up>S|"up">A and
|x down>S|ready>A → |x down>S|"down">A.
We have not done any real physics yet; we have just indicated how the physics must come out if there are to be items that count as good x spin measuring devices, as we think there are.

The important part of the physics that generates the measurement problem is the arrow in the representations listed earlier, the physical evolution that takes one from the initial state of the system plus apparatus to the final state. Quantum theory provides laws of evolution for quantum states such as the Schrödinger and Dirac equations. These would be the equations one would use to model the evolution of the system plus apparatus as a normal physical evolution. And all these dynamical equations have a common mathematical feature: they are all linear equations. It is this feature of the quantum theory that generates the measurement problem, so we should pause over the notion of linearity.

The set of wave functions used in quantum theory forms a vector space. This means that one can take a weighted sum of any set of wave functions and get another wave function. (The weights in this case are complex numbers; hence it is a complex vector space.) This property was mentioned earlier when it was noted that any wave function can be expressed as a weighted sum of the eigenstates of an observable. An operator on a vector space is just an object that maps a vector as input to another vector as output. If the operator O maps the vector A to the vector B, we can write that as
O(A) = B.
A linear operator has the feature that you get the same result whether you operate on the sum of two vectors or you first operate on each vector and then take the sum. That is, if O is a linear operator, then for all vectors A and B,
O(A + B) = O(A) + O(B).
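
Since any matrix acting on column vectors is a linear operator, this property can be checked directly. Here is a minimal sketch; the particular matrix and vectors are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
O = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))  # any matrix
A = rng.standard_normal(2) + 1j * rng.standard_normal(2)            # any vectors
B = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Operating on the sum gives the same result as operating and then summing.
assert np.allclose(O @ (A + B), O @ A + O @ B)
```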

The dynamical equations evidently correspond to operators; they take as input the initial physical state and give as output the final state, after a specified period has elapsed. But further, the Schrödinger and Dirac equations correspond to linear operators. Why is this important?

We have already seen how the physical state of a good x spin measuring device must evolve when fed a particle in the state |x up>S or the state |x down>S. But these are not the only spin states that the incoming particle can occupy. There is an infinitude of spin states, which correspond to all the wave functions that can be expressed as α|x up>S + β|x down>S, with α and β complex numbers such that |α|² + |β|² = 1. Correspondingly, there is an infinitude of possible directions in space in which one can orient a spin measuring device, and each of the directions is associated with a different Hermitian operator. For a direction at right angles to the x direction, call it the y direction, there are eigenstates |y up>S and |y down>S. These states can be expressed as weighted sums of the x spin eigenstates, and in the usual notation
|y up>S = 1/√2|x up>S + 1/√2|x down>S and
|y down>S = 1/√2|x up>S − 1/√2|x down>S.
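
A quick numerical check of this decomposition, with the x spin eigenstates written as basis vectors and the y eigenstates as the weighted sums just displayed (a sketch, not part of the original argument):

```python
import numpy as np

x_up = np.array([1, 0], dtype=complex)
x_down = np.array([0, 1], dtype=complex)

y_up = (x_up + x_down) / np.sqrt(2)
y_down = (x_up - x_down) / np.sqrt(2)

# Each weight has squared magnitude 1/2, so an x spin measurement on
# |y up> should yield each outcome with probability 1/2.
print(abs(np.vdot(x_up, y_up))**2, abs(np.vdot(x_down, y_up))**2)  # 0.5 0.5
```
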
So what happens if we feed a particle in the state |y up>S into the good x spin measuring device?

Empirically, we know what happens: About half the time the apparatus ends up indicating "up" and about half the time it ends up indicating "down." There is nothing we are able to do to control the outcome: y up eigenstate particles that are identically prepared nonetheless yield different outcomes in this experiment.

If we use the usual predictive apparatus, we also get this result. The "up" result from the apparatus is associated with the eigenstate |x up>S and the "down" result associated with |x down>S. The general recipe tells us to express the incoming particle in terms of these eigenstates as 1/√2|x up>S + 1/√2|x down>S, and then to take the squared magnitudes of the weighting factors to get the probabilities of the results. This yields a probabilistic prediction of 50 percent chance "up" and 50 percent chance "down," which corresponds to what we see in the lab.

But if instead of the usual predictive apparatus we use the general account of physical interactions, we get into trouble. In that case, we would represent the initial state of the system plus apparatus as |y up>S|ready>A. The dynamical equation can now be used to determine the physical state of the system plus apparatus at the end of the experiment.

But the linearity of the dynamical equations already determines what the answer must be. For
|y up>S|ready>A = (1/√2|x up>S + 1/√2|x down>S)|ready>A
= 1/√2|x up>S|ready>A + 1/√2|x down>S|ready>A.
But we know how each of the two terms of this superposition must evolve, since the apparatus is a good x spin measuring device. By linearity, this initial state must evolve into the final state
1/√2|x up>S|"up">A + 1/√2|x down>S|"down">A.
That is, the final state of the apparatus plus system must be a superposition of a state in which the apparatus yields the result "up" and a state in which the apparatus yields the result "down." That is what treating the measurement as a normal physical interaction must imply.
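
The derivation can be checked mechanically. In the sketch below (illustrative, not from the original text), the apparatus is modeled as a three-state system with basis states ready, "up," and "down," and the evolution is any linear map that does what a good x spin measurer must do on the two special initial states; linearity then fixes the final state for the |y up> input.

```python
import numpy as np

x_up, x_down = np.eye(2, dtype=complex)            # system basis
ready, ap_up, ap_down = np.eye(3, dtype=complex)   # apparatus basis (illustrative)

# A linear evolution constrained only by what a good x spin measurer must do:
#   |x up>|ready>   -> |x up>|"up">
#   |x down>|ready> -> |x down>|"down">
# (Its action elsewhere is irrelevant to this calculation.)
U = (np.outer(np.kron(x_up, ap_up), np.kron(x_up, ready))
     + np.outer(np.kron(x_down, ap_down), np.kron(x_down, ready)))

y_up = (x_up + x_down) / np.sqrt(2)
final = U @ np.kron(y_up, ready)

# Linearity forces the superposition of the two indicator states:
target = (np.kron(x_up, ap_up) + np.kron(x_down, ap_down)) / np.sqrt(2)
assert np.allclose(final, target)
```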

So by making axioms about measurements, we have both committed redundancy and achieved inconsistency. The axioms say that the outcome of the experiment is not determined by the initial state; each of two outcomes is possible, with a 50 percent chance of each. But the treatment of the measurement as a normal physical interaction implies that only one final physical state can occur. And furthermore, that final physical state is an extremely difficult one to understand. It appears to be neither a state in which the measuring apparatus is indicating "up" nor a state in which the apparatus is indicating "down," but some sort of symmetric combination of the two. If all the physical facts about the apparatus are somehow represented in its wave function, then it seems that at the end of the experiment the apparatus can neither be indicating up (and not down) nor down (and not up). But we always see one or the other when we do this experiment.

At this point our attention must clearly be turned to the mathematical object we have called the wave function. The wave function is supposed to represent the physical state of a system. The question is whether the wave function represents all of the physical features of a system, or whether systems represented by the same wave function could nevertheless be physically different. If one asserts the former, then one believes that the wave function is complete; if the latter, then the wave function is incomplete. The standard interpretations of the quantum formalism take the wave function to be complete; interpretations that take it to be incomplete are commonly called hidden variables theories (although that is a misleading name).

The wave function 1/√2|x up>S|"up">A + 1/√2|x down>S|"down">A does not represent the apparatus as indicating up (and not down) or as indicating down (and not up). So if the wave function is complete, the apparatus, at the end of the experiment, must neither be indicating up (and not down) nor down (and not up). But that flatly contradicts our direct experience of such apparatus. This is the measurement problem. As Bell puts it, "Either the wave function, as given by the Schrödinger equation, is not everything, or it is not right" (1987, p. 201).

Collapse Interpretations

collapse tied to observation

What is one to do? From the beginning of discussions of these matters, Einstein took the argument to show that the wave function is not everything and hence that quantum mechanics is incomplete. The wave function might represent part of the physical state of a system, or the wave function might represent some features of ensembles (collections) of systems, but the wave function cannot be a complete representation of the physical state of an individual system, like the particular x spin measuring device in the laboratory after a particular experiment is done. For after the experiment, the apparatus evidently either indicates "up" or it indicates "down," but the wave function does not represent it as doing so.

By contrast, the founders of the quantum theory, especially Bohr, insisted that the wave function is complete. And they did not want to deny that the measuring device ends up indicating one determinate outcome. So the only option left was to deny that the wave function, as given by the Schrödinger equation, is right. At some times, the wave function must evolve in a way that is not correctly described by the Schrödinger equation. The wave function must "collapse." The standard interpretation of quantum mechanics holds that the wave function evolves, at different times, in either of two different ways. This view was given its canonical formulation in John von Neumann's Mathematical Foundations of Quantum Mechanics (1955). Von Neumann believed (incorrectly, as we will see) that he had proven the impossibility of supplementing the wave function with hidden variables, so he thought the wave function must be complete. When he comes to discuss the time evolution of systems, von Neumann says "[w]e therefore have two fundamentally different types of interventions which can occur in a system S. First, the arbitrary [i.e., nondeterministic] changes by measurement. Second, the automatic [i.e., deterministic] changes which occur with the passage of time" (p. 351). The second type of change is described by, for example, the Schrödinger equation, and the first by an indeterministic process of collapse.

What the collapse dynamics must be can be read off from the results we want together with the thesis that the wave function is complete. For example, in the x spin measurement of the y spin up electron, we want there to be a 50 percent chance that the apparatus indicates "up" and a 50 percent chance that it indicates "down." But the only wave function that represents an apparatus indicating "up" is |"up">A, and the only wave function for an apparatus indicating "down" is |"down">A. So instead of a deterministic transition to the final state
1/√2|x up>S|"up">A + 1/√2|x down>S|"down">A
we must postulate an indeterministic transition with a 50 percent chance of yielding |x up>S|"up">A and a 50 percent chance of yielding |x down>S|"down">A.
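
The collapse postulate is easy to mimic in simulation. A minimal sketch (the outcome labels and sample size are illustrative): the post-measurement state is chosen at random, with probabilities given by the squared weights.

```python
import numpy as np

rng = np.random.default_rng(42)

# The two possible post-collapse states and their (equal) weights.
outcomes = ['|x up>|"up">', '|x down>|"down">']
weights = np.array([1, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(weights)**2                    # 0.5 and 0.5

# Each run of the experiment collapses to one branch at random.
samples = rng.choice(outcomes, size=10_000, p=probs)
print(np.mean(samples == outcomes[0]))        # ~0.5
```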

It is clear what the collapse dynamics must do. What is completely unclear, though, is when it must do it. All von Neumann's rules say is that we get collapses when measurements occur and deterministic evolutions "with the passage of time." But surely measurements also involve the passage of time; so under exactly what conditions does each of the evolutions obtain? Collapse theories, which postulate two distinct and incompatible forms of evolution of the wave function, require some account of when each type of evolution occurs.

Historically, this line of inquiry was influenced by the association of the problem with "measurement" or "observation." If one begins with the thought that the non-linear evolution happens only when a measurement or observation occurs, then the problem becomes one of specifying when a measurement or observation occurs. And this in turn suggests that we need a characterization of an observer who makes the observation. Pushing even further, one can arrive at the notion that observations require a conscious observer of a certain kind, folding the problem of consciousness into the mix. As Bell asks, "What exactly qualifies some physical systems to play the role of 'measurer'? Was the wave function of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system with a Ph.D.?" (1987, p. 117).

This line of thought was discussed by Eugene Wigner: "This way out of the difficulty amounts to the postulate that the equations of motion of quantum mechanics cease to be linear, in fact that they are grossly non-linear if conscious beings enter the picture" (1967, p. 183). Wigner suggests that the quantum measurement problem indicates "the effect of consciousness on physical phenomena," a possibility of almost incomprehensible implications (not the least of which: How could conscious beings evolve if there were no collapses, since the universe would surely be in a superposition of states with and without conscious beings!). In any case, Wigner's speculations never amounted to a physical theory, nor could they unless a physical characterization of a conscious system was forthcoming.

So if one adopts a collapse theory, and if the collapses are tied to measurements or observations, then one is left with the problem of giving a physical characterization of an observation or a measurement. Such physicists as Einstein and Bell were incredulous of the notion that conscious systems play such a central role in the physics of the universe.

spontaneous collapse theories

Nonetheless, precise theories of collapse do exist. The key to resolving the foregoing puzzle is to notice that although collapses must be of the right form to make the physical interactions called "observations" and "measurements" have determinate outcomes, there is no reason that the collapse dynamics itself need mention observation or measurement. The collapse dynamics merely must be of such a kind as to give outcomes in the right situations.

The most widely discussed theory of wave function collapse was developed by GianCarlo Ghirardi, Alberto Rimini, and Tullio Weber (1986) and is called the spontaneous localization theory or, more commonly, the GRW theory. The theory postulates an account of wave function collapse that makes no mention of observation, measurement, consciousness, or anything of the sort. Rather, it supplies a universal rule for both how and when the collapse occurs. The "how" of the collapse involves localization in space; when the collapse occurs, one takes a single particle and multiplies its wave function, expressed as a function of space, by a narrow Gaussian (bell curve). This has the effect of localizing the particle near the center of the Gaussian, in the sense that most of the wave function will be near the center. If the wave function before the collapse is widely spread out over space, after the collapse it is much more heavily weighted to a particular region. The likelihood that a collapse will occur centered at a particular location depends on the squared amplitude of the precollapse wave function at that location. The collapses, unlike Schrödinger evolution, are fundamentally nondeterministic, chancy events.
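
The mechanics of a single GRW "hit" can be sketched numerically. Everything in the toy example below is illustrative: the grid, the two-lump wave function, and a localization width of 1 in grid units standing in for GRW's actual constant. The collapse center is drawn with probability given by the squared amplitude, and the wave function is then multiplied by a Gaussian and renormalized.

```python
import numpy as np

rng = np.random.default_rng(7)

# A one-dimensional wave function on a grid: two widely separated lumps.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-(x - 5)**2) + np.exp(-(x + 5)**2)
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# Choose the collapse center: probability given by the squared amplitude.
p = np.abs(psi)**2 * dx
center = rng.choice(x, p=p / p.sum())

# Multiply by a narrow Gaussian about the center and renormalize.
width = 1.0                                        # stand-in for GRW's constant
psi = psi * np.exp(-(x - center)**2 / (2 * width**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)
# Almost all of the amplitude now sits in one lump.
```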

The GRW collapse does not perfectly locate the wave function at a point. It could not do so for straightforward physical reasons: The localization process violates the conservation of energy, and the more narrowly the postcollapse wave function is confined, the more new energy is pumped into the system. If there were perfect localizations, the energy increase would be infinite, and immediately evident. (It follows from these same observations that even in the "standard" theory there are never collapses to perfectly precise positions, even after a so-called position measurement.)

Therefore, the GRW theory faces a decision: Exactly how localized should the localized wave function be? This corresponds to choosing a width for the Gaussian: The narrower the width, the more energy is added to the system on collapse. The choice of this width is bounded in one direction by observation (the energy increase for the universe must be below observed bounds, and particular processes, such as spontaneous ionization, should be rare) and in the other direction by the demand that the localization solve the measurement problem. As it happens, Ghirardi, Rimini, and Weber chose a value of about 10⁻⁵ centimeters for the width of the Gaussian. This is a new constant of nature.

Besides the "how" of the collapse, the GRW theory must specify the "when." It was here that we saw issues such as consciousness getting into the discussion: If collapses occur only when measurements or observations occur, then we must know when measurements or observations occur. The GRW theory slices through this problematic neatly; it simply postulates that the collapses take place at random, with a fixed probability per unit time. This introduces another new fundamental constant: the average time between collapses per particle. The value of that constant is also limited in two directions; on the one hand, we know from interference experiments that isolated individual particles almost never suffer collapses on the time scale of laboratory operations. On the other hand, the collapses must be frequent enough to resolve the measurement problem. The GRW theory employs a value of 10¹⁵ seconds, or roughly 30 million years, for this constant.

Clearly, the constant has been chosen large enough to solve one problem: Individual isolated particles will almost never suffer collapses in the laboratory. It is less clear, though, how it solves the measurement problem.

The key here is to note that actual experiments record their outcomes in the correlated positions of many, many particles. In our spin experiment we said that our spin measuring device must have two distinct indicator states: |"up"> and |"down">. To be a useful measuring device, these indicator states must be macroscopically distinguishable. This is achieved with macroscopic objects (pointers, drops of ink, and so on) to indicate the outcome. And a macroscopic object will have on the order of 10²³ particles.

So suppose the outcome |"up"> corresponds to a pointer pointing to the right and the outcome |"down"> corresponds to the pointer pointing to the left. If there are no collapses, the device will end up with the wave function 1/√2|x up>S|"up">A + 1/√2|x down>S|"down">A. Now although it is unlikely that any particular particle in the pointer will suffer a collapse on the time scale of the experiment, because there are so many particles in the pointer, it is overwhelmingly likely that some particle or other in the pointer will suffer a collapse quickly: within about 10⁻⁸ seconds. And (this is the key), since in the state 1/√2|x up>S|"up">A + 1/√2|x down>S|"down">A all the particle positions are correlated with one another, if the collapse localizes a single particle in the pointer, it localizes all of them. So, if having the wave functions of all the particles in the pointer highly concentrated on the right (or on the left) suffices to solve the measurement problem, the problem will be solved before 10⁻⁴ seconds has elapsed.
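
The arithmetic behind this is worth displaying. Assuming the per-particle collapse time of about 10¹⁵ seconds quoted above and a pointer of about 10²³ particles:

```python
# Expected waiting time for the *first* collapse anywhere in the pointer.
per_particle_rate = 1e-15   # collapses per second per particle (GRW constant above)
n_particles = 1e23          # order of magnitude for a macroscopic pointer
pointer_rate = per_particle_rate * n_particles   # 1e8 collapses per second
print(1 / pointer_rate)     # ~1e-8 seconds before some particle collapses
```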

The original GRW theory has been subject to much discussion. In a technical direction there have been similar theories, by Ghirardi and Rimini and by Philip Pearle, that make the collapses continuous rather than discrete. More fundamentally, there have been two foundational questions: First, does the only approximate nature of the "localization" vitiate its usefulness in solving the measurement problem, and second, does the theory require a physical ontology distinct from the wave function? Several suggestions for such an additional ontology have been put forward, including a mass density in space-time, and discrete events ("flashes") in space-time.

The addition of such extra ontology, beyond the wave function, reminds us of the second horn of Bell's dilemma: Either the wave function as given by the Schrödinger equation is not right or it is not everything. The versions of the GRW theory that admit a mass density or flashes postulate that the wave function is not everything, but they do so in such a way that the exact state of the extra ontology can be recovered from the wave function. The more radical proposal is that there is extra ontology whose state cannot be read off the wave function. These are the so-called hidden variables theories.

Additional Variables Theories

According to an additional variables theory, the complete quantum state of the system after a measurement is indeed 1/√2|x up>S|"up">A + 1/√2|x down>S|"down">A. The outcome of the measurement cannot be read off of that state because the outcome is realized in the state of the additional variables, not in the wave function. It immediately follows that for any such theory, the additional ontology, the additional variables, had best not be "hidden": since the actual outcome is manifest, the additional variables had best be manifest. Indeed, on this approach the role of the wave function in the theory is to determine the evolution of the additional variables. The wave function, since it is made manifest only through this influence, is really the more "hidden" part of the ontology.

The best known and most intensively developed additional variables theory goes back to Louis de Broglie, but is most intimately associated with David Bohm. In its nonrelativistic particle version, Bohmian mechanics, physical objects are constituted of always-located point particles, just as was conceived in classical mechanics. At any given time, the physical state of a system comprises both the exact positions of the particles and a wave function. The wave function never collapses: it always obeys a linear dynamical equation like the Schrödinger equation. Nonetheless, at the end of the experiment the particles in the pointer will end up either all on the right or all on the left, thus solving the measurement problem. This is a consequence of the dynamics of the particles as determined by the wave function.

It happens that the particle dynamics in Bohmian mechanics is completely deterministic, although that is not fundamentally important to the theory, and indeterministic versions of Bohm's approach have been developed. More important, the dynamical equation used in Bohmian mechanics is the simplest equation that one can write down if one assumes that the particle trajectories are to be determined by the wave function and that various symmetries are to be respected. If one starts with the idea that there are particles and that quantum theory should be a theory of the motion of those particles that reproduces the predictions of the standard mathematical recipe, Bohmian mechanics is the most direct outcome.
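
The equation in question is the guidance equation, which sets each particle's velocity from the wave function. Here is a one-dimensional sketch in illustrative units (ħ = m = 1); the plane-wave example is chosen only because its answer is easy to check by hand.

```python
import numpy as np

hbar, m = 1.0, 1.0   # illustrative units

def velocity(psi, dpsi_dx, x):
    """Bohmian guidance equation in 1-D: v = (hbar/m) Im(psi'/psi)."""
    return (hbar / m) * np.imag(dpsi_dx(x) / psi(x))

# Example: a plane wave exp(i k x) with wave number k.
k = 2.0
psi = lambda x: np.exp(1j * k * x)
dpsi = lambda x: 1j * k * np.exp(1j * k * x)

print(velocity(psi, dpsi, 0.3))   # hbar*k/m = 2.0, the expected velocity
```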

Since Bohmian mechanics is a deterministic theory, the outcome of any experiment is fixed by the initial state of the system. The probabilities derived from the standard mathematical recipe must therefore be interpreted purely epistemically: they reflect our lack of knowledge of the initial state. This lack of knowledge turns out to have a physical explanation in Bohmian mechanics: Once one models any interaction designed to acquire information about a system as a physical interaction between a system and an observer, it can be shown to follow that initial uncertainty about the state of the target system cannot be reduced below a certain bound, given by the Heisenberg uncertainty relations.

This illustrates the degree to which the ontological "morals" of quantum theory are held hostage to interpretations. In the standard interpretation, when the wave function of a particle is spread out, there is no further fact about exactly where the particle is. (Because of this, position measurements in the standard theory are not really measurements, i.e., they do not reveal preexisting facts about positions.) In Bohm's interpretation, when the wave function is spread out, there is a fact about exactly where the particle is, but it follows from physical analysis that one cannot find out more exactly where it is without thereby altering the wave function (more properly, without altering the effective wave function that we use to make predictions). Similarly, in the standard interpretation, when we do a position measurement on a spread out particle, there is an indeterministic collapse that localizes the particle, giving it an approximate location. According to Bohm's theory the same interaction really is a measurement: It reveals the location that the particle already had. So it is a fool's errand to ask after "the ontological implications of quantum theory": the account of the physical world one gets depends critically on the interpretation of the formalism.

Bohm's approach has been adapted to other choices for the additional variables. In particular, interpretations of field theory have been pursued in two different ways: with field variables that evolve indeterministically, and with the addition to Bohmian mechanics of the possibility of creating and annihilating particles in an indeterministic way. Each of these provides the wherewithal to treat standard field theory.

There have been extensive examinations of other ways to add additional variables to a noncollapse interpretation, largely under the rubric of modal interpretations. Both rules for specifying what the additional variables are and rules for the dynamics of the new variables have been investigated.

A Third Way?

There are also some rather radical attempts to reject each of Bell's two options and to maintain both that the wave function, as given by the Schrödinger equation, is right and that it is everything, that is, that it is descriptively complete. Since a wave function such as 1/√2|x up>S|"up">A + 1/√2|x down>S|"down">A does not indicate that one outcome rather than the other occurred, this requires maintaining that it is not the case that one outcome rather than the other occurred.

This denial can come in two flavors. One is to maintain that neither outcome occurred, or even seemed to occur, and one is only somehow under the illusion that one did. David Z. Albert (1992) investigated this option under the rubric the bare theory. Ultimately, the bare theory is insupportable, since any coherent account must at least allow that the quantum mechanical predictions appear to be correct.

The more famous attempt in this direction contends that, in some sense, both outcomes occur, albeit in different "worlds." Evidently, the wave function 1/√2|x up>S|"up">A + 1/√2|x down>S|"down">A can be written as the mathematical sum of two pieces, one of which corresponds to a situation with the apparatus indicating "up" and the other to a situation with the apparatus indicating "down." The many worlds theory attempts to interpret this as a single physical state, which somehow contains or supports two separate "worlds," one with each outcome.

The many worlds interpretation confronts several technical and interpretive hurdles. The first technical hurdle arises because any wave function can be written as the sum of other wave functions in an infinitude of ways. For example, consider the apparatus state 1/√2|"up">A + 1/√2|"down">A. Intuitively, this state does not represent the apparatus as having fired one way or another. This state can be called |D1>A. Similarly, |D2>A can represent the state 1/√2|"up">A − 1/√2|"down">A, which also does not correspond to an apparatus with a definite outcome. The state 1/√2|x up>S|"up">A + 1/√2|x down>S|"down">A, which seems to consist in two "worlds," one with each outcome, can be written just as well as 1/√2|y up>S|D1>A + 1/√2|y down>S|D2>A. Written in this way, the state seems to comprise two worlds: one in which the electron has y spin up and the apparatus is not in a definite indicator state, the other in which the electron has y spin down and the apparatus is in a distinct physical state that is equally not a definite indicator state. If these are the "two worlds," then the measurement problem has not been solved; it has merely been traded in: a single world without a definite outcome has been exchanged for a pair of worlds, neither of which has a definite outcome.
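
The ambiguity is a straightforward algebraic fact and can be verified numerically; in this sketch both system and apparatus are reduced to two basis states apiece (an illustrative simplification):

```python
import numpy as np

x_up, x_down = np.eye(2, dtype=complex)     # system states
ap_up, ap_down = np.eye(2, dtype=complex)   # apparatus indicator states

y_up = (x_up + x_down) / np.sqrt(2)
y_down = (x_up - x_down) / np.sqrt(2)
D1 = (ap_up + ap_down) / np.sqrt(2)
D2 = (ap_up - ap_down) / np.sqrt(2)

# One and the same state, decomposed two different ways:
lhs = (np.kron(x_up, ap_up) + np.kron(x_down, ap_down)) / np.sqrt(2)
rhs = (np.kron(y_up, D1) + np.kron(y_down, D2)) / np.sqrt(2)
assert np.allclose(lhs, rhs)
```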

So the many worlds theory would first have to maintain that there is a preferred way to decompose the global wave function into "worlds." This is known as the preferred basis problem.

A more fundamental difficulty arises when one tries to understand the status of the probabilities in the many worlds theory. In a collapse theory the probabilities are probabilities for collapses to occur one way rather than another, and there is a physical fact about how the collapses occur, and therefore about frequencies of outcomes. In an additional variables theory the probabilities are about which values the additional variables take, and there is a physical fact about the values they take and therefore about frequencies of outcomes. But in the many worlds theory, whenever one does an experiment like the spin measurement described earlier, the world splits: There is no frequency with which one outcome occurs as opposed to the other. And more critically, that the world "splits" has nothing to do with the amplitude assigned to the two daughter worlds.

Suppose, for example, that instead of feeding a y spin up electron into our x spin measuring device, we feed in an electron whose state is 1/2|x up>S + √3/2|x down>S. By linearity, at the end of the experiment, the state of the system plus apparatus is 1/2|x up>S|"up">A + √3/2|x down>S|"down">A. Even if we have solved the preferred basis problem and can assert that there are now two worlds, one with each outcome, notice that we are evidently in exactly the same situation as in the original experiment: Whenever we do the experiment, the universe "splits." But the quantum formalism counsels us to have different expectations in the two cases: in the first case, we should expect to get an "up" outcome 50 percent of the time, in the second case only 25 percent of the time. It is unclear, in the many worlds theory, what the expectations are for, and why they should be different.
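
The two expectations come straight from the squared weights, as a two-line check shows:

```python
import numpy as np

alpha, beta = 1 / 2, np.sqrt(3) / 2      # weights on |x up> and |x down>
assert np.isclose(abs(alpha)**2 + abs(beta)**2, 1.0)
print(abs(alpha)**2, abs(beta)**2)       # 0.25 and 0.75
```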

Another interpretation of the quantum formalism that has been considered is the many minds theory of Barry Loewer and Albert. Despite the name, the many minds theory is not allied in spirit with the many worlds theory: It is rather an additional variables theory in which the additional variables are purely mental subjective states. This is somewhat akin to Wigner's appeal to consciousness to solve the measurement problem, but where Wigner's minds affect the development of the wave function, the minds in this theory (as is typical for additional variables theories) do not. The physical measurement apparatus in the problematic case does not end up in a definite indicator state, but a mind is so constituted that it will, in this situation, have the subjective experience of seeing a particular indicator state. Which mental state the mind evolves into is indeterministic. The preferred basis problem is addressed by stipulating that there is an objectively preferred basis of physical states that are associated with distinct mental states.

The difference between the many worlds and the many minds approaches is made most vivid by noting that the latter theory does not need more than one mind to solve the measurement problem, where the problem is now understood as explaining the determinate nature of our experience. A multiplicity of minds is added to Loewer and Albert's theory only to recover a weak form of mind-body supervenience: Although the experiential state of an individual mind does not supervene on the physical state of the body with which it is associated, if one associates every body with an infinitude of minds, the distribution of their mental states can supervene on the physical state of the body.

A final attempt to address the problems of quantum mechanics deserves brief mention. Some maintain that the reason quantum mechanics is so confusing is not because the mathematical apparatus requires emendation (e.g., by explicitly adding a collapse or additional variables) or an interpretation (i.e., an account of exactly which mathematical objects represent physical facts), but because we reason about the quantum world in the wrong way. Classical logic, it is said, is what is leading us astray. We merely need to replace our patterns of inference with quantum logic.

There is a perfectly good mathematical subject that sometimes goes by the name quantum logic, which is the study, for example, of relations between subspaces of Hilbert space. These studies, like all mathematics, employ classical logic. There is, however, no sense in which these studies, by themselves, afford a solution to the measurement problem or explain how it is that experiments like those described earlier have unique, determinate outcomes.

The Wave Function, Entanglement, EPR, and Non-Locality

For the purposes of this discussion, the wave function has been treated as if it were something like the electromagnetic field: a field defined on space. Although this is not too misleading when discussing a single particle, it is entirely inadequate when considering collections of particles. The wave function for N particles is a function not on physical space, but on the 3N-dimensional configuration space, each point of which specifies the exact location of all the N particles. This allows for the existence of entangled wave functions, in which the physical characteristics of even widely separated particles cannot be specified independently of one another.

Consider R and L, a pair of widely separated particles. Among the wave functions available for this pair is one that ascribes x spin up to R and x spin down to L, which is written as |x up>R|x down>L, and one that attributes x spin down to R and x spin up to L: |x down>R|x up>L. These are called product states, and all predictions from these states about how R will respond to a measurement are independent of what happens to L, and vice versa.

But besides these product states, there are entangled states like the singlet state: 1/√2|x up>R|x down>L − 1/√2|x down>R|x up>L. In this state the x spins of the two particles are said to be anticorrelated since a measurement of their x spins will yield either up for R and down for L or down for R and up for L (with a 50 percent chance for each outcome). Even so, if the wave function is complete, then neither particle in the singlet state has a determinate x spin: the state is evidently symmetrical between spin up and spin down for each particle considered individually.

How can the x spins of the particles be anticorrelated if neither particle has an x spin? The standard answer must appeal to dispositions: although in the singlet state neither particle is disposed to display a particular x spin on measurement, the pair is jointly disposed to display opposite x spins if both are measured. Put another way, on the standard interpretation, before either particle is measured neither has a determinate x spin, but after one of them is measured, and, say, displays x spin up, the other acquires a surefire disposition to display x spin down. And this change occurs simultaneously, even if the particles happen to be millions of miles apart.
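
A sketch of the predicted statistics: the joint probabilities are just the squared weights of the two product terms in the singlet state (the labels below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)

# The singlet state's two product terms each carry weight 1/sqrt(2),
# so the joint x spin outcomes are (up, down) or (down, up), 50/50.
joint_outcomes = [("R: up", "L: down"), ("R: down", "L: up")]

for _ in range(5):
    r, l = joint_outcomes[rng.choice(2, p=[0.5, 0.5])]
    print(r, l)   # always opposite: perfect anticorrelation
```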

Einstein found this to be a fundamentally objectionable feature of the standard interpretation of the wave function. In a paper coauthored with Boris Podolsky and Nathan Rosen (EPR 1935), Einstein pointed out this mysterious, instantaneous "spooky action-at-a-distance" built into the standard approach to quantum theory. It is uncontroversial that an x spin measurement carried out on L with, say, an "up" outcome will result in a change of the wave function assigned to R: It will now be assigned the state |x down>R. If the wave function is complete, then this must reflect a physical change in the state of R because of the measurement carried out on L, even though there is no physical process that connects the two particles. What EPR pointed out (using particle positions rather than spin, but to the same effect) was that the correlations could easily be explained without postulating any such action-at-a-distance. The natural suggestion is that when we assign a particular pair of particles the state 1/√2|x up>R|x down>L − 1/√2|x down>R|x up>L, it is a consequence of our ignorance of the real physical state of the pair: The pair is either in the product state |x up>R|x down>L or in the product state |x down>R|x up>L, with a 50 percent chance of each. This simple expedient will predict the same perfect anticorrelations without any need to invoke a real physical change of one particle consequent to the measurement of the other.

So matters stood until 1964, when Bell published his famous theorem. Bell showed that Einstein's approach could not possibly recover the full range of quantum mechanical predictions. That is, no theory can make the same predictions as quantum mechanics if it postulates (1) that distant particles, such as R and L, each have their own physical state, definable independently of the other, and (2) that measurements made on each of the particles have no physical effect on the other. Entanglement of states turns out to be an essential feature, arguably the central feature, of quantum mechanics. And entanglement between widely separated particles implies non-locality: The physics of either particle cannot be specified without reference to the state and career of the other.

The spooky action-at-a-distance that Einstein noted is not just an artifact of an interpretation of the quantum formalism; it is an inherent feature of physical phenomena that can be verified in the laboratory. A fundamental problem is that the physical connection between the particles is not just spooky (unmediated by a continuous space-time process), it is superluminal. It remains unclear to this day how to reconcile this with the theory of relativity.

See also Bohm, David; Bohmian Mechanics; Many Worlds/Many Minds Interpretation of Quantum Mechanics; Modal Interpretation of Quantum Mechanics; Non-locality; Philosophy of Physics; Quantum Logic and Probability.

Bibliography

Albert, David Z. Quantum Mechanics and Experience. Cambridge, MA: Harvard University Press, 1992.

Bell, John S. Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy. Cambridge, U.K.: Cambridge University Press, 1987.

Dürr, Detlef, Sheldon Goldstein, and Nino Zanghì. "Quantum Equilibrium and the Origin of Absolute Uncertainty." Journal of Statistical Physics 67 (1992): 843–907.

Ghirardi, GianCarlo, Alberto Rimini, and Tullio Weber. "Unified Dynamics for Microscopic and Macroscopic Systems." Physical Review D 34 (2) (1986): 470–491.

Maudlin, Tim. Quantum Non-locality and Relativity: Metaphysical Intimations of Modern Physics. Malden, MA: Blackwell, 2002.

Von Neumann, John. Mathematical Foundations of Quantum Mechanics. Translated by Robert T. Beyer. Princeton, NJ: Princeton University Press, 1955.

Wheeler, John Archibald, and Wojciech Hubert Zurek, eds. Quantum Theory and Measurement. Princeton, NJ: Princeton University Press, 1983.

Wigner, Eugene. Symmetries and Reflections. Westport, CT: Greenwood Press, 1967.

Tim Maudlin (2005)

Physics, Quantum


Quantum theory is one of the most successful theories in the history of physics. The accuracy of its predictions is astounding. The breadth of its application is impressive. Quantum theory is used to explain how atoms behave, how elements can combine to form molecules, how light behaves, and even how black holes behave. There can be no doubt that there is something very right about quantum theory.

But at the same time, it is difficult to understand what quantum theory is really saying about the world. In fact, it is not clear that quantum theory gives any consistent picture of what the physical world is like. Quantum theory seems to say that light is both wavelike and particlelike. It seems to say that objects can be in two places at once, or even that cats can be both alive and dead, or neither alive nor dead, or... what? There can be no doubt that there is something troubling about quantum theory.


Early research

Quantum theory, more or less as it is known at the beginning of the twenty-first century, was developed during the first quarter of the twentieth century in response to several problems that had arisen with classical mechanics. The first is the problem of blackbody radiation. A blackbody is any physical body that absorbs all incident radiation. As the blackbody continues to absorb radiation, its internal energy increases until, like a bucket full of water, it can hold no more and must re-emit radiation equal in energy to any additional incident radiation. The problem is, most simply, that the classical prediction for the energy of the emitted radiation as a function of its frequency is wrong. The problem was well known but unsolved until the German physicist Max Planck (1858–1947) proposed in 1900 the hypothesis that the energy absorbed and emitted by the blackbody could come only in discrete amounts, multiples of some constant, finite amount of energy. While Planck himself never felt satisfied with this hypothesis as more than a localized, phenomenological description of the behavior of blackbodies, others eventually accepted Planck's hypothesis as a revolution, a claim that energy itself can come in only discrete amounts, the quanta of quantum theory.
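
The clash between the classical and quantized predictions can be made concrete. Below, Planck's law (which follows from the quantization hypothesis) is compared with the classical Rayleigh-Jeans formula, which grows without bound at high frequency; the constants are standard values, and the temperature is an arbitrary illustrative choice.

```python
import numpy as np

h = 6.626e-34    # Planck's constant (J s)
c = 2.998e8      # speed of light (m/s)
kB = 1.381e-23   # Boltzmann's constant (J/K)

def planck(nu, T):
    """Planck's law: finite at all frequencies."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

def rayleigh_jeans(nu, T):
    """Classical prediction: grows without bound as nu increases."""
    return 2 * nu**2 * kB * T / c**2

T = 5000.0   # kelvin (illustrative)
for nu in (1e13, 1e14, 1e15):
    print(f"{nu:.0e} Hz: Planck {planck(nu, T):.3e}, classical {rayleigh_jeans(nu, T):.3e}")
```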

A second problem with classical mechanics was the challenge of describing the spectrum of hydrogen, and eventually, other elements. Atomic spectra are most easily understood in light of a fundamental formula linking the energy of light with its frequency: E = hν, where E is the energy of light, h is a constant (Planck's constant, as it turns out), and ν is the frequency of the light (which determines the color of the visible light).

Suppose, now, that the energy of some atom (for example, an atom of hydrogen) is increased. If the atom is subsequently allowed to relax, it releases the added energy in the form of (electromagnetic) radiation. The relationship E = hν reveals that the frequency of the light depends on the amount of energy that the atom emits as it relaxes. Prior to the development of quantum theory, the best classical theory of the atom was Ernest Rutherford's (1871–1937), according to which negatively charged electrons orbit a central positively charged nucleus. The energy of a hydrogen atom (which has only one electron) corresponds to the distance of the electron from the nucleus. (The further the electron is, the higher its energy is.) Rutherford's model predicts that the radiation emitted by a hydrogen atom could have any of a continuous set of possible energies, depending on the distance of its electron from the nucleus. Hence a large number of hydrogen atoms with energies randomly distributed among them will emit light of many frequencies. However, in the nineteenth century it was well known that hydrogen emits only a few frequencies of visible light.

In 1913, Niels Bohr (1885–1962) introduced the hypothesis that the electrons in an atom can be only certain distances from the nucleus; that is, they can exist in only certain "orbits" around the nucleus. The differences in the energies of these orbits correspond to the possible energies of the radiation emitted by the atom. When an electron with high energy "falls" to a lower orbit, it releases just the amount of energy that is the difference between the energies of the higher and lower orbits. Because only certain orbits are possible, the atom can emit only certain frequencies of light.
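
Bohr's hypothesis can be turned into a short calculation. Using the standard Bohr energies Eₙ = −13.6 eV/n² (the constants below are standard values, not from the original text), the visible Balmer lines of hydrogen fall out of drops to the n = 2 orbit:

```python
h = 6.626e-34    # Planck's constant (J s)
c = 2.998e8      # speed of light (m/s)
E1 = 2.18e-18    # hydrogen ground-state binding energy (J), about 13.6 eV

def orbit_energy(n):
    """Bohr's allowed energies: E_n = -13.6 eV / n^2."""
    return -E1 / n**2

# A "fall" from orbit m to orbit 2 emits one photon with E = h * nu.
for m in (3, 4, 5):
    E_photon = orbit_energy(m) - orbit_energy(2)
    nu = E_photon / h                    # E = h * nu
    print(m, round(c / nu * 1e9), "nm")  # 656, 486, 434 nm: the visible lines
```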

The crucial part of Bohr's proposal is that electrons cannot occupy the space between the orbits, so that when the electron passes from one orbit to another, it "jumps" between them without passing through the space in between. Thus, Bohr's model violates the principle of classical mechanics that particles always follow continuous trajectories. In other words, Bohr's model left little doubt that classical mechanics had to be abandoned.

Over the next twelve years, the search was on for a replacement. By 1926, as the result of considerable experimental and theoretical work on the part of numerous physicists, two theories, experimentally equivalent, were introduced, namely, Werner Heisenberg's (1901–1976) matrix mechanics and Erwin Schrödinger's (1887–1961) wave mechanics.


Matrix mechanics. Heisenberg's matrix mechanics arose out of a general approach to quantum theory already advocated by Bohr and Wolfgang Pauli (1900–1958), among others. In Heisenberg's hands, this approach became a commitment to remove from the theory any quantities that cannot be observed. Heisenberg took as his "observables" such things as the transition probabilities of the hydrogen atom (the probability that an electron would make a transition from a higher to a lower orbit). Heisenberg introduced operators that, in essence, represented such observable quantities mathematically. Soon thereafter, Max Born (1882–1970) recognized Heisenberg's operators as matrices, which were already well understood mathematically.

Heisenberg's operators can be used in place of the continuous variables of Newtonian physics. Indeed, one can replace Newtonian position and momentum with their matrix "equivalents" and obtain the equations of motion of quantum theory, commonly called (in this form) Heisenberg's equations. The procedure of replacing classical (Newtonian) quantities with the analogous operators is known as quantization. A complete understanding of quantization remains elusive, due primarily to the fact that quantum-mechanical operators can be incompatible, which means in particular that they cannot be comeasured.
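Incompatibility has a concrete matrix meaning: the order of multiplication matters. Here is a minimal sketch in Python (a hypothetical illustration using the standard Pauli spin matrices as stand-ins for incompatible observables; the example is not drawn from the text):

```python
# Two incompatible observables: the x and z components of spin,
# represented by the 2x2 Pauli matrices.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

# For compatible (co-measurable) observables, AB - BA would be zero.
commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
print(commutator)  # nonzero, so the two spin components are incompatible
```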

Wave mechanics. Schrödinger's wave mechanics arose from a different line of reasoning, primarily due to Louis de Broglie (1892–1987) and Albert Einstein (1879–1955). Einstein had for some time expressed a commitment to a physical world that can be adequately described causally, which meant that it could be described in terms of quantities that evolve continuously in time. Einstein, who was primarily responsible for showing that light has both particlelike and wavelike properties, hoped early on for a theory that somehow "fused" these two aspects of light into a single consistent theory.

In 1923, de Broglie instituted the program of wave mechanics. He was impressed by the Hamilton-Jacobi approach to classical physics, in which the fundamental equations are wave equations, but the fundamental objects of the theory are still particles, whose trajectories are determined by the waves. Recalling this formalism, de Broglie suggested that the particlelike and wavelike properties of light might be reconcilable in similar fashion. Einstein's enthusiasm for de Broglie's ideas (both because de Broglie's waves evolved continuously and because the theory fused the wavelike and particlelike properties of light and matter) stimulated Schrödinger to work on the problem from that point of view, and in 1926 Schrödinger published his wave mechanics.

It was quickly realized that matrix mechanics and wave mechanics are experimentally equivalent. Shortly thereafter, in 1932, John von Neumann (1903–1957) showed their equivalence rigorously by introducing the Hilbert space formalism of quantum theory. The Uncertainty Principle serves to illustrate the equivalence. The Uncertainty Principle follows immediately from Heisenberg's matrix mechanics. Indeed, in only a few lines of argument, one can arrive at the mathematical statement of the Uncertainty Principle for any operators (physical quantities) A and B: ΔA · ΔB ≥ Kh, where K is a constant that depends on A and B, and h is Planck's constant. The symbol ΔA means "root mean square deviation of A" and is a measure of the statistical dispersion (uncertainty) in a set of values of A. So the Uncertainty Principle says that the statistical dispersion in values of A times the statistical dispersion in values of B is always greater than or equal to some constant. If (and only if) A and B are incompatible (see above), then this constant is greater than zero, so that it is impossible to measure both A and B on an ensemble of physical systems in such a way as to have no dispersion in the results.
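The inequality can be verified numerically. The Python sketch below uses the Robertson form of the principle, in which the lower bound is |⟨[A, B]⟩|/2 rather than the constant K of the text (an assumption of this illustration), and checks it for the incompatible spin observables of the previous example in a randomly chosen state:

```python
# Numerical check of dA * dB >= |<[A, B]>| / 2 (the Robertson form of
# the Uncertainty Principle) for the spin observables sigma_x, sigma_z.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)  # a random normalized quantum state

def mean(op):
    return (psi.conj() @ op @ psi).real

def dispersion(op):
    """Root mean square deviation of the observable in the state psi."""
    return np.sqrt(mean(op @ op) - mean(op) ** 2)

lhs = dispersion(sx) * dispersion(sz)
rhs = abs(psi.conj() @ (sx @ sz - sz @ sx) @ psi) / 2
print(lhs >= rhs - 1e-12)  # True, for this and any other state
```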

Schrödinger's wave mechanics gives rise to the same result. It is easiest to see how it does so in the context of the classic example involving position and momentum, which are incompatible quantities. In the context of Schrödinger's wave mechanics, the probability of finding a particle at a given location is determined by the amplitude (height) of the wave at that location. Hence, a particle with a definite position is represented by a "wave" that is zero everywhere except at the location of the particle. On the other hand, a particle with definite momentum is represented by a wave that is flat (i.e., has the same amplitude at all points); conversely, the momentum becomes more and more "spread out" as the wave becomes more sharply peaked in position. Hence the more precisely one can predict the location of a particle, the less precisely one can predict its momentum. A more quantitative version of these considerations leads, again, to the Uncertainty Principle.
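This trade-off is a general property of waves and their Fourier transforms, which a few lines of Python can exhibit (a sketch in arbitrary units, with illustrative parameter values): however a Gaussian packet is squeezed in position, the product of its width and the width of its transform stays at the minimum value of 1/2.

```python
# Width trade-off between a wave packet and its Fourier transform.
import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)  # angular wavenumbers

def width_product(sigma):
    psi = np.exp(-x ** 2 / (4 * sigma ** 2))  # packet of position width sigma
    phi = np.fft.fft(psi)                     # its momentum-space profile
    p_x = np.abs(psi) ** 2 / np.sum(np.abs(psi) ** 2)
    p_k = np.abs(phi) ** 2 / np.sum(np.abs(phi) ** 2)
    return np.sqrt(np.sum(p_x * x ** 2)) * np.sqrt(np.sum(p_k * k ** 2))

for sigma in (0.5, 1.0, 2.0):
    print(sigma, round(width_product(sigma), 3))  # always ~0.5
```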

Quantum field theory. Perhaps the major development after the original formulation of quantum theory by Heisenberg and Schrödinger (with further articulation by many others) was the extension of quantum mechanics to fields, resulting in quantum field theory. Paul Dirac (1902–1984) and others extended the work to relativistic field theories. The central idea is the same: The quantities of classical field theory are quantized in an appropriate way. Work on quantum field theory is ongoing, a central unresolved issue being how one can incorporate the force of gravity, and specifically Einstein's relativistic field theory of gravity, into the framework of relativistic quantum field theory. A related, though even more speculative, area of research is quantum cosmology, which is, more or less, the attempt to discern how Big Bang theory (itself derived from Einstein's theory of gravity) will have to be modified in the light of quantum gravity.


Contemporary research

Contemporary research in the interpretation of quantum theory focuses on two key issues: the "measurement problem" and locality (Bell's Theorem).

Schrödinger's cat. Although the essence of the measurement problem was clear to several researchers even before 1925, it was perhaps first clearly stated in 1935 by Schrödinger. In his famous example, Schrödinger imagines a cat in the following unfortunate situation. A box, containing the cat, also contains a sample of some radioactive substance that has a probability of 1/2 to decay within one hour. Any decay is detected by a Geiger counter, which releases poison into the box if it detects a decay. At the end of an hour, the state of the cat is indeterminate between "alive" and "dead," in much the same way that a state of definite position is indeterminate with regard to momentum.

The cat is said to be in a superposition of the alive state and the dead state. In standard quantum theory, such a superposition is interpreted to mean that the cat is neither determinately alive, nor determinately dead. But, says Schrödinger, while one might be able to accept that particles such as electrons are somehow indeterminate with respect to position or momentum, one can hardly accept indeterminacy in the state of a cat.

More generally, Schrödinger's point is that indeterminacy at the level of the usual objects of quantum theory (electrons, protons, and so on) can easily be transformed into indeterminacy at the level of everyday objects (such as cats, pointers on measuring apparatuses, and so on) simply by coupling the state of the everyday object to the state of the quantum object. Such couplings are exactly the source of our ability to measure the quantum objects in the first place. Hence, the problem that Schrödinger originally raised with respect to the cat is now called the measurement problem: Everyday objects such as cats and pointers can, according to standard quantum theory, be indeterminate in state. For example, a cat might be indeterminate with respect to whether it is alive. A pointer might be indeterminate with respect to its location (i.e., it is pointing in no particular direction).

Approaches to the measurement problem. Thus, the interpretation of quantum theory faces a serious problem, the measurement problem, to which there have been many approaches. One approach, apparently advocated by Einstein, is to search for a hidden-variables theory to underwrite the probabilities of standard quantum theory. The central idea here is that the indeterminate description of physical systems provided by quantum theory is incomplete. Hidden variables (so-called because they are "hidden" from standard quantum theory) complete the quantum-mechanical description in a way that renders the state of the system determinate in the relevant sense. The most famous example of a successful hidden-variables theory is the 1952 theory of David Bohm (1917–1992), itself an extension of a theory proposed by Louis de Broglie in the 1920s. In the de Broglie-Bohm theory, particles always have determinate positions, and those positions evolve deterministically as a function of their own initial position and the initial positions of all the other particles in the universe. The probabilities of standard quantum theory are obtained by averaging over the possible initial positions of the particles, so that the probabilities of standard quantum theory are due to ignorance of the initial conditions, just as in classical mechanics. According to some, the problematic feature of this theory is its nonlocality: the velocity of a given particle can depend instantaneously on the positions of particles arbitrarily far away.

Other hidden-variables theories exist, both deterministic and indeterministic. They have some basic features in common with the de Broglie-Bohm theory, although they do not all take position to be "preferred"; some choose other preferred quantities. In the de Broglie-Bohm theory, position is said to be "preferred" because all particles always have a definite position, by stipulation.

There are other approaches to solving the measurement problem. One set of approaches involves so-called Many-worlds interpretations, according to which each of the possibilities inherent in a superposition is in fact actual, though each in its own distinct and independent "world." There is a variant, the Many-minds theory, according to which each observer observes each possibility, though with distinct and independent "minds." These interpretations have a notoriously difficult time reproducing the probabilities of quantum theory in a convincing way. A slightly more technical, but perhaps even more troubling, issue arises from the fact that any superposition can be "decomposed" into possibilities in an infinity of ways. So, for example, a superposition of "alive" and "dead" can also be decomposed into other pairs of possibilities. It is unclear how Many-worlds interpretations determine which decomposition is used to define the "worlds," though there are various proposals.

Yet another set of approaches to the measurement problem is loosely connected to the Copenhagen Interpretation of quantum theory. According to these approaches, physical quantities have meaning only in the context of an experimental arrangement designed to measure them. On these approaches, the standard quantum-mechanical state describes our ignorance about which properties a system has, where the possible properties are fixed by the experimental context. Only those properties that could be revealed in this experimental context are considered "possible." In this way, these interpretations sidestep the issue of which decomposition of a superposition one should take to describe the possibilities over which the probabilities are defined. Once a measurement is made, the superposition is "collapsed" to the possibility that was in fact realized by the measurement. In this context, the collapse is a natural thing to do, because the quantum-mechanical state represents our ignorance about which experimental possibility would turn up. The major problem facing these approaches is to define "measurement" and "experimental context" in a sufficiently rigorous way.

Another set of approaches consists of the realistic collapse proposals. Like the Copenhagen approaches, they take the quantum-mechanical state of a system to be its complete description, but unlike them, these approaches allow the meaningfulness of physical properties even outside of the appropriate experimental contexts. The issue of how to specify when collapse will occur is thus somewhat more pressing for these approaches because the collapse represents not a change in our knowledge, but a physical change in the world. There are several attempts to provide an account of when collapse will occur, perhaps the two most famous being observer-induced collapse and spontaneous localization theories. According to the former, notably advocated by Eugene Wigner (1902–1995), the act of observation by a conscious being has a real effect on the physical state of the world, causing it to change from a superposition to a state representing the world as perceived by the conscious observer. This approach faces the very significant problem of explaining why there should be any connection between the act of conscious observation and the state of, for example, some electron in a hydrogen atom.

The spontaneous-localization theories define an observer-independent mechanism for collapse that depends, for example, on the number of particles in a physical system. For low numbers of particles the rate of collapse is very slow, whereas for higher values, the rate of collapse is very high. The collapse itself occurs continuously, by means of a randomly distributed infinitesimal deformation of the quantum state. The dynamics of the collapse are designed to reproduce the probabilities of quantum theory to a very high degree of accuracy.
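Rough numbers show why such a mechanism can have it both ways. In the best-known proposal, due to Ghirardi, Rimini, and Weber (GRW), the assumed per-particle localization rate is about 10⁻¹⁶ per second; the back-of-the-envelope Python sketch below is a simplified, discrete-hit caricature of the model built on that assumed value, and it contrasts a lone particle with a macroscopic pointer:

```python
# GRW-style estimate: mean waiting time before some particle in a system
# of N particles suffers a spontaneous localization "hit."
LAMBDA = 1e-16  # assumed per-particle hit rate, per second (GRW's value)

def mean_collapse_time(n_particles):
    return 1.0 / (LAMBDA * n_particles)

print(mean_collapse_time(1))     # ~1e16 s: a lone particle, essentially never
print(mean_collapse_time(1e23))  # ~1e-7 s: a macroscopic pointer, almost at once
```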


The problem of nonlocality. The other major issue facing the interpretation of quantum theory is nonlocality. In 1964, John Bell (1928–1990) proved that, under natural conditions, any interpretation of quantum theory must be nonlocal. More precisely, in certain experimental situations, the states of well-separated pairs of particles are correlated in a way that cannot be explained in terms of a common cause. One can think, here, of everyday cases to illustrate the point. Suppose you write the same word on two pieces of paper and send them to two people, who open the envelopes simultaneously and discover the word. There is a correlation between these two events (they both see the same word), but the correlation is easily explained in terms of a common cause, you.

Under certain experimental circumstances, particles exhibit similar correlations in their states, and yet those correlations cannot be explained in terms of a common cause. It seems, instead, that one must invoke nonlocal explanations, explanations that resort to the idea that something in the vicinity of one of the particles instantaneously influences the state of the other particle, even though the particles are far apart.
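The standard quantitative form of this result is Bell's CHSH inequality: for any common-cause (local hidden-variable) model, a certain combination S of correlations satisfies |S| ≤ 2, while quantum mechanics predicts values up to 2√2. A short Python sketch (using the textbook singlet-state correlation E(a, b) = −cos(a − b), an assumption not spelled out in the text) exhibits the violation:

```python
# CHSH combination of correlations for two spin measurements on a
# singlet pair, at detector angles chosen to maximize the violation.
import numpy as np

def E(a, b):
    """Quantum correlation for measurements along angles a and b."""
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # settings on the first particle
b1, b2 = np.pi / 4, 3 * np.pi / 4  # settings on the second particle

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), beyond the local bound of 2
```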

On the face of it, nonlocality contradicts special relativity. According to standard interpretations of the theory of relativity, causal influences cannot travel faster than light, and in particular, events in one region of space cannot influence events in other regions of space if the influence would have to travel faster than light to get from one region to the other in time to influence the event.

However, the matter is not so simple as a direct contradiction between quantum theory and relativity. The best arguments for the absence of faster-than-light influences in relativity are based on the fact that faster-than-light communication (more specifically, transfer of information) can lead to causal paradoxes. But in the situations to which Bell's theorem applies, the purported faster-than-light influences cannot be exploited to enable faster-than-light communication. This result is attributable to the indeterministic nature of standard quantum theory. In de Broglie and Bohm's deterministic hidden-variable theory, one could exploit knowledge of the values of the hidden variables to send faster-than-light signals; however, such knowledge is, in Bohm's theory, physically impossible in principle.

Other areas of research. There are of course many other areas of research in the interpretation of quantum theory. These include traditional areas of concern, such as the classical limit of quantum theory. How do the nonclassical predictions of quantum theory become (roughly) equivalent to the (roughly accurate) predictions of classical mechanics in some appropriate limit? How is this limit defined? In general, what is the relationship between classical and quantum theory? Other areas of research arise from work in quantum theory itself, perhaps the most notable being the work in quantum computation. It appears that a quantum computer could perform some computations qualitatively faster than a classical computer. Apart from obvious practical considerations, the possibility of quantum computers raises questions about traditional conceptions of computation, and possibly, thereby, about traditional philosophical uses of those conceptions, especially concerning the analogies often drawn between human thought and computation.


Applications to religious thought

Quantum theory was the concern of numerous religious thinkers during the twentieth century. Given the obviously provisional status of the theory, not to mention the extremely uncertain state of its interpretation, one must proceed with great caution here, but we can at least note some areas of religious thought to which quantum theory, or its interpretation, has often been taken to be relevant.

Perhaps the most obvious is the issue of whether the world is ultimately deterministic or not. Several thinkers, including such scientists as Isaac Newton (1642–1727) and Pierre-Simon Laplace (1749–1827), have seen important ties to religious thought. In the case of classical mechanics, Newton had good reason to believe that his theory did not completely determine the phenomena, whereas Laplace (who played a key role in patching up the areas where Newton saw the theory to fail) had good reason to think that the theory did completely and deterministically describe the world. Newton thus saw room for God's action in the world; Laplace did not.

In the case of quantum theory the situation is considerably more difficult because there exist both indeterministic and deterministic interpretations of the theory, each of which is empirically adequate. Indeed, they are empirically equivalent. Those who, for various reasons, have adopted one or the other interpretation, though, have gone on to investigate the consequences for religious thought. Some, for example, see in quantum indeterminism an explanation of the possibility of human free will. Others have suggested that quantum indeterminism leaves an important role for God in the universe, namely, as the source of the agreement between actual relative frequencies and the probabilistic predictions of quantum theory.

Other thinkers have seen similarities between aspects of quantum theory and Eastern religions, notably various strains of Buddhism and Daoism. Fritjof Capra (b. 1939), who is perhaps most famous in this regard, has drawn analogies between issues that arise from the measurement problem and quantum nonlocality and what he takes to be Eastern commitments to the "connectedness" of all things. Other thinkers have seen in the interpretive problems of quantum theory evidence of a limitation in science's ability to provide a comprehensive understanding of the world, thus making room for other, perhaps religious, modes of understanding. Still others, drawing on views such as Wigner's (according to which conscious observation plays a crucial role in making the world determinate), see in quantum theory a justification of what they take to be traditional religious views about the role of conscious beings in the world. Others, including Capra, see affinities between wave-particle duality, or more generally, the duality implicit in the Uncertainty Principle, and various purportedly Eastern views about duality (for example, the Daoist doctrine of yin and yang, or the Buddhist use of koans).

Finally, quantum cosmology has provided some with material for speculation. One must be extraordinarily careful here because there is, at present, no satisfactory theory of quantum gravity, much less of quantum cosmology. Nonetheless, a couple of (largely negative) points can be made. First, it is clear that the standard Big Bang theory will have to be modified, somehow or other, in light of quantum theory. Hence, the considerable discussion to date of the religious consequences of the Big Bang theory will also need to be reevaluated. Second, due to considerations that arise from the time-energy Uncertainty Principle, even a satisfactory quantum cosmology is unlikely to address what happened in the early universe prior to the Planck time (approximately 10⁻⁴³ seconds) because quantum theory itself holds that units of time less than the Planck time are (perhaps) meaningless. Some have seen here a fundamental limit in scientific analysis, a limit that is implied by the science itself. Of course, others see an opportunity for a successor theory.

This situation is, in fact, indicative of the state of quantum theory as a whole. While it is an empirically successful theory, its interpretations, and hence any consequences it might have for religious thought, remain matters of speculation.


See also Copenhagen Interpretation; EPR Paradox; Heisenberg's Uncertainty Principle; Indeterminism; Locality; Many-worlds Hypothesis; Phase Space; Planck Time; Quantum Cosmologies; Quantum Field Theory; Schrödinger's Cat; Wave-particle Duality


Bibliography

Bohm, David. Quantum Theory. New York: Dover, 1989.

Gribbin, John. In Search of Schrödinger's Cat: Quantum Physics and Reality. New York: Bantam, 1984.

Heisenberg, Werner. Physical Principles of the Quantum Theory. New York: Dover, 1930.

Shankar, Ramamurti. Principles of Quantum Mechanics. New York: Plenum, 1994.


W. Michael Dickson

Quantum Mechanics

views updated May 23 2018

Quantum Mechanics


Quantum mechanics is a fundamental part of theoretical physics: the theory used to provide an understanding of the behavior of microscopic particles such as electrons and atoms. More importantly, quantum mechanics describes the relationships between energy and matter on atomic and subatomic scales. Thus, it replaces classical mechanics and electromagnetism when dealing with these very small scales. Quantum mechanics is used in such scientific fields as computational chemistry, condensed matter, molecular physics, nuclear physics, particle physics, and quantum chemistry.

At the beginning of the twentieth century, German physicist Max Planck (1858–1947) proposed that atoms absorb or emit electromagnetic radiation in bundles of energy termed quanta. This quantum concept seemed counter-intuitive to well-established Newtonian physics. Ultimately, advancements associated with quantum mechanics (e.g., the uncertainty principle) also had profound implications with regard to the philosophical scientific arguments regarding the limitations of human knowledge.

Planck proposed that atoms absorb or emit electromagnetic radiation in defined and discrete units (quanta). Planck's quantum theory also asserted that the energy of light was directly proportional to its frequency, and this proved a powerful observation that accounted for a wide range of physical phenomena.

Planck's constant relates the energy of a photon with the frequency of light. Along with the constant for the speed of light (c), Planck's constant (h = 6.626 × 10⁻³⁴ joule-second) is a fundamental constant of nature.

Prior to Planck's work, electromagnetic radiation (light) was thought to travel in waves with an infinite number of available frequencies and wavelengths. Planck's work focused on attempting to explain the limited spectrum of light emitted by hot objects and to explain the absence of what was termed the "ultraviolet catastrophe" predicted by nineteenth-century theories developed by Prussian physicist Wilhelm Wien (1864–1928) and English physicist Baron (John William Strutt) Rayleigh (1842–1919).

Danish physicist Niels Bohr (1885–1962) studied Planck's quantum theory of radiation and worked in England with physicists J. J. Thomson (1856–1940) and Ernest Rutherford (1871–1937), improving their classical models of the atom by incorporating quantum theory. During this time, Bohr developed his model of atomic structure. To account for the observed properties of hydrogen, Bohr proposed that electrons existed only in certain orbits and that, instead of traveling between orbits, electrons made instantaneous quantum leaps or jumps between allowed orbits. According to the Bohr model, when an electron is excited by energy it jumps from its ground state to an excited state (i.e., a higher energy orbital). The excited atom can then emit energy only in certain (quantized) amounts as its electrons jump back to lower energy orbits located closer to the nucleus. This excess energy is emitted in quanta of electromagnetic radiation (photons of light) that have exactly the same energy as the difference in energy between the orbits jumped by the electron.

The electron quantum leaps between orbits proposed by the Bohr model accounted for Planck's observation that atoms emit or absorb electromagnetic radiation in quanta. Bohr's model also explained many important properties of the photoelectric effect described by German-American physicist Albert Einstein (1879–1955).

Using probability theory, and allowing for a wave-particle duality, quantum mechanics also replaced classical mechanics as the method by which to describe interactions between subatomic particles. Quantum mechanics replaced the electron "orbitals" of classical atomic models with allowable values for angular momentum (angular velocity multiplied by the moment of inertia) and depicted the electron's position in terms of probability "clouds" and regions.

In the 1920s, the concept of quantization and its application to physical phenomena was further advanced by more mathematically complex models based on the work of French physicist Louis Victor de Broglie (1892–1987) and Austrian physicist Erwin Schrödinger (1887–1961) that depicted the particle and wave nature of electrons. De Broglie showed that the electron was not merely a particle but a waveform. This proposal led Schrödinger to publish his wave equation in 1926. Schrödinger's work described electrons as a "standing wave" surrounding the nucleus, and his system of quantum mechanics is called wave mechanics. German physicist Max Born (1882–1970) and English physicist P. A. M. Dirac (1902–1984) made further advances in defining the subatomic particles (principally the electron) as a wave rather than as a particle and in reconciling portions of quantum theory with relativity theory.

Working at about the same time, German physicist Werner Heisenberg (1901–1976) formulated the first complete and self-consistent theory of quantum mechanics. Matrix mathematics was well established by the 1920s, and Heisenberg applied this powerful tool to quantum mechanics. In 1927, Heisenberg put forward his uncertainty principle, which states that two complementary properties of a system, such as position and momentum, can never both be known exactly. This proposition helped cement the dual nature of particles (e.g., light can be described as having both wave and particle characteristics). Electromagnetic radiation (one region of the spectrum of which comprises visible light) is now understood as having both particle and wave-like properties.

In 1925, Austrian-born physicist Wolfgang Pauli (1900–1958) published the Pauli exclusion principle, which states that no two electrons in an atom can simultaneously occupy the same quantum state (i.e., energy state). Pauli's specification of spin (+1/2 or −1/2) for an electron gave the two electrons in any suborbital differing quantum numbers (a system used to describe the quantum state) and made completely understandable the structure of the periodic table in terms of electron configurations (i.e., the energy-related arrangement of electrons in energy shells and suborbitals). In 1931, American chemist Linus Pauling (1901–1994) published a paper that used quantum mechanics to explain how two electrons, from two different atoms, are shared to make a covalent bond between the two atoms. Pauling's work provided the connection needed in order to fully apply the new quantum theory to chemical reactions.
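The counting behind these electron configurations is simple to sketch (a hypothetical illustration, not drawn from the text): for principal quantum number n there are n² orbitals, and the exclusion principle lets each orbital hold two electrons of opposite spin.

```python
# Shell capacities implied by the Pauli exclusion principle: for
# principal quantum number n there are n**2 orbitals, each holding
# two electrons of opposite spin.
def shell_capacity(n):
    return 2 * n ** 2

print([shell_capacity(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
```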

Quantum mechanics posed profound questions for scientists and philosophers. The concept of particles such as electrons making quantum leaps from one orbit to another, as opposed to simply moving between orbits, seems counter-intuitive, that is, outside the human experience with nature. Like much of quantum theory, the proofs of how nature works at the atomic level are mathematical. Bohr himself remarked, "Anyone who is not shocked by quantum theory has not understood it."

Quantum results

Quantum mechanics requires advanced mathematics to give numerical predictions for the outcome of measurements. However, one can understand many significant results of the theory from the basic properties of the probability waves. An important example is the behavior of electrons within atoms. Since such electrons are confined in some manner, scientists expect that they must be represented by standing waves that correspond to a set of allowed frequencies. Quantum mechanics states that for this new type of wave, its frequency is proportional to the energy associated with the microscopic particle. Thus, one reaches the conclusion that electrons within atoms can only exist in certain states, each of which corresponds to only one possible amount of energy. The energy of an electron in an atom is an example of an observable that is quantized; that is, it comes in certain allowed amounts, called quanta (like quantities).

When an atom contains more than one electron, quantum mechanics predicts that two of the electrons both exist in the state with the lowest energy, called the ground state. The next eight electrons are in the state of the next highest energy, and so on, following a specific relationship. This is the origin of the idea of electron shells, or orbits, although these are just convenient ways of talking about the states. The first shell is filled by two electrons, the second shell is filled by another eight, etc. This explains why some atoms try to combine with other atoms in chemical reactions.

This idea of electron states also explains why different atoms emit different colors of light when they are heated. Heating an object gives extra energy to the atoms inside it, and this can transform an electron within an atom from one state to another of higher energy. The atom eventually loses the energy when the electron transforms back to the lower-energy state. Usually the extra energy is carried away in the form of light, which is said to have been produced by the electron making a transition, or a change of its state. The difference in energy between the two states of the electron (before and after the transition) is the same for all atoms of the same kind. Thus, those atoms will always give off a wavelength and frequency of light (i.e., color) that corresponds to that energy. Another element's atomic structure contains electron states with different energies (since the electron is confined differently), and so the differing energy levels produce light in other regions of the electromagnetic spectrum. Using this principle, scientists can determine which elements are present in stars by measuring the exact colors in the emitted light.

KEY TERMS

Classical mechanics: A collection of theories, all derived from a few basic principles, that can be used to describe the motion of macroscopic objects.

Macroscopic: This term describes large-scale objects like those humans directly interact with on an everyday basis.

Microscopic: This term describes extremely small-scale objects, such as electrons and atoms, with which humans seldom interact on an individual basis as they do with macroscopic objects.

Observable: A physical quantity, like position, velocity, or energy, that can be determined by a measurement.

Planck's constant: A constant, written as h, that was introduced by Max Planck in his quantum theory and that appears in every formula of quantum mechanics.

Probability: The likelihood that a certain event will occur. If something happens half of the time, its probability is 1/2 = 0.5 = 50%.

Quantum: The discrete amount of radiant energy associated with a jump between orbits of an electron around the nucleus of an atom.

Wave: A motion, in which energy and momentum are carried away from some source, that repeats itself in space and time with little or no change.

Quantum theory has been extremely successful in explaining a wide range of phenomena, including a description of how electrons move in materials (e.g., through the chips in a personal computer). Quantum mechanics is also used to understand superconductivity, the decay of nuclei, and how lasers work.

Theoretical implications of quantum mechanics

The standard model of quantum physics offers a theoretically and mathematically sound model of particle behavior that serves as an empirically validated middle ground between the need for undiscovered hidden variables that determine particle behavior, and a mystical anthropocentric universe where it is the observations of humans that determine reality. Although the implications of the latter can be easily dismissed, the debate over the existence of hidden variables in quantum theory remained a subject of serious scientific debate during the twentieth century and, now, early into the twenty-first century. Based upon everyday experience, well explained by the deterministic concepts of classical physics, it is intuitive to suppose that there are hidden variables to determine quantum states. Nature is not, however, obliged to act in accord with what is convenient or easy to understand. Although the existence and understanding of heretofore hidden variables might seemingly explain Einstein's "spooky" forces, the existence of such variables would simply raise the question of whether they, too, involve their own hidden variables.

Quantum theory breaks this never-ending chain of causality by asserting (with substantial empirical evidence) that there are no hidden variables. Moreover, quantum theory replaces the need for a deterministic evaluation of natural phenomena with an understanding of particles and particle behavior based upon statistical probabilities. Although some philosophers and metaphysicians would like to keep the hidden-variable argument alive, the experimental evidence is persuasive and compelling that, at the least, no local hidden variables exist.

See also Quantum number.

Resources

BOOKS

Bohr, Niels. The Unity of Knowledge. New York: Doubleday & Co., 1955.

Duck, Ian. 100 Years of Planck's Quantum. Singapore and River Edge, NJ: World Scientific, 2001.

Feynman, Richard P. QED: The Strange Theory of Light and Matter. New Jersey: Princeton University Press, 1985.

_____. The Character of Physical Law. MIT Press, 1985.

Huang, Fannie, ed. Quantum Physics: An Anthology of Current Thought. New York: Rosen Publishing Group, 2006.

Lewin, Roger. Making Waves: Irving Dardik and His Superwave Principle. Emmaus, PA: Rodale, 2005.

Liboff, Richard L. Introductory Quantum Mechanics, 4th ed. Addison-Wesley Publishing, 2002.

Mehra, Jagdish. The Golden Age of Theoretical Physics. Singapore and River Edge, NJ: World Scientific, 2000.

Phillips, A.C. Introduction to Quantum Mechanics. New York: John Wiley & Sons, 2003.

K. Lee Lerner


Quantum Mechanics

views updated May 21 2018

QUANTUM MECHANICS

What is quantum mechanics? An answer to this question can be found by contrasting quantum and classical mechanics. Classical mechanics is a framework—a set of rules—used to describe the behavior of ordinary-sized things: footballs, specks of dust, planets. Classical mechanics is familiar to everyone through commonplace activities like tossing balls, driving cars, and chewing food. Physicists have studied classical mechanics for centuries (whence the name "classical") and developed elaborate mathematical tools to make accurate predictions involving complex situations: situations like the motion of satellites, the twist of a spinning top, or the jiggle of jello. Sometimes (as in the spinning top) the results of classical mechanics are unexpected, but always the setting is familiar due to one's daily interaction with ordinary-sized things.

Quantum mechanics is a parallel framework used to describe the behavior of very small things: atoms, electrons, quarks. When physicists began exploring the atomic realm (starting around 1890), the obvious thought was to apply the familiar classical framework to the new atomic situation. This resulted in disaster; the classical mechanics that had worked so well in so many other situations failed spectacularly when applied to atomic-sized situations. The obvious need to find a new framework remained the central problem of physics until 1925, when that new framework—the framework of quantum mechanics—was discovered by Werner Heisenberg. Quantum mechanics does not involve familiar things, so it is not surprising that both the results and the setting are often contrary to anything that we would have expected from everyday experience. Quantum mechanics is not merely unfamiliar; it is counterintuitive.

The fact that quantum mechanics is counterintuitive does not mean that it is unsuccessful. On the contrary, quantum mechanics is the most remarkably successful product of the human mind, the brightest jewel in our intellectual crown. To cite just one example, quantum mechanics predicts that an electron behaves in some ways like a tiny bar magnet. The strength of that magnet can be measured with high accuracy and is found to be, in certain units,

1.001 159 652 188

with a measurement uncertainty of about four in the last digit. The strength of the electron's magnet can also be predicted theoretically through quantum mechanics. The predicted strength is

1.001 159 652 153

with about seven times as much uncertainty. The agreement between experiment and quantum theory is magnificent: if I could measure the distance from New York to Los Angeles to this accuracy, my measurement would be accurate to within the thickness of a silken strand.

So, what does this unfamiliar quantum mechanical framework look like? Why did it take thirty-five years of intense effort to discover? The framework has four pillars: quantization, probability, interference, and entanglement.

Quantization

A classical marble rolling within a bowl can have any energy at all (as long as it's greater than or equal to the minimum energy of a stationary marble resting at the bottom of the bowl). The faster the marble moves, the more energy it has, and that energy can be increased or decreased by any amount, whether large or small. But a quantal electron moving within a bowl can have only certain specified amounts of energy. The electron's energy can again be increased or decreased, but the electron cannot accept just any arbitrary amount of energy: it can only absorb or emit energy in certain discrete lumps. If an attempt is made to increase its energy by less than the minimum lump, it will not accept any energy at all. If an attempt is made to increase its energy by two and a half lumps, the electron will accept only two of them. If an attempt is made to decrease its energy by four and two-thirds lumps, it will give up only four. This phenomenon is an example of quantization, a word derived from the Latin quantus, meaning "how much."

Quantization was the first of the four pillars to be uncovered, and it gave its name to the topic, but today quantization is not regarded as the most essential characteristic of quantum mechanics. There are atomic quantities, like momentum, that do not come in lumps, and under certain circumstances even energy doesn't come in lumps.

Furthermore, quantization of a different sort exists even within the classical domain. For example, a single organ pipe cannot produce just any tone, but only those tones for which it is tuned.

Probability

Suppose a gun is clamped in a certain position with a certain launch angle. A bullet is shot from this gun, and a mark is made where the bullet lands. Then a second, identical, bullet is shot from the gun at the same position and angle. The bullet leaves the muzzle with the same speed. And the bullet lands exactly where the first bullet landed. This unsurprising fact is called determinism: identical initial conditions always lead to identical results, so the results are determined by the initial conditions. Indeed, using the tools of classical mechanics one can, if given sufficient information about the system as it exists, predict exactly how it will behave in the future.

It often happens that this prediction is very hard to execute or that it is very hard to find sufficient information about the system as it currently exists, so that an exact prediction is not always a practical possibility—for example, predicting the outcome when one flips a coin or rolls a die. Nevertheless, in principle the prediction can be done even if it's so difficult that no one would ever attempt it.

But it is an experimental fact that if one shoots two electrons in sequence from a gun, each with exactly the same initial condition, those two electrons will probably land at different locations (although there is some small chance that they will go to the same place). The atomic realm is probabilistic, not deterministic. The tools of quantum mechanics can predict probabilities with exquisite accuracy, but they cannot predict exactly what will happen because nature itself doesn't know exactly what will happen.

The second pillar of quantum mechanics is probability: Even given perfect information about the current state of the system, no one can predict exactly what the future will hold. This is indeed an important hallmark distinguishing quantum and classical mechanics, but even in the classical world probability exists as a practical matter—every casino operator and every politician relies upon it.

Interference

A gun shoots a number of electrons, one at a time, toward a metal plate punched with two holes. On the far side of the plate is a bank of detectors to determine where each electron lands. (Each electron is launched identically, so if one were launching classical bullets instead of quantal electrons, each would take an identical route to an identical place. But in quantum mechanics the several electrons, although identically launched, might end up at different places.)

First the experiment is performed with the right hole blocked. Most of the electrons strike the metal plate and never reach the detectors, but those that do make it through the single open hole end up in one of several different detectors—it's more likely that they will hit the detectors toward the left than those toward the right. Similar results hold if the left hole is blocked, except that now the rightward detectors are more likely to be hit.

What if both holes are open? It seems reasonable that an electron passing through the left hole when both holes are open should behave exactly like an electron passing through the left hole when the right hole is blocked. After all, how could such an electron possibly know whether the right hole were open or blocked? The same should be true for an electron passing through the right hole. Thus, the pattern of electron strikes with both holes open would be the sum of the pattern with the right hole blocked plus the pattern with the left hole blocked.

In fact, this is not what happens at all. The distribution of strikes forms an intricate pattern: bands of intense electron bombardment separated by gaps with absolutely no strikes. Some detectors are struck by many electrons when the right hole is blocked, by some electrons when the left hole is blocked, but by no electrons at all when both holes are open. And this is true even if only a single electron is present in the apparatus at any instant!

What went wrong with the above reasoning? In fact, the flaw is not in the reasoning but in an unstated premise. The assumption was made that an electron moving from the gun to the detector bank would pass through either the right hole or the left. This simple, common-sense premise is—and must be—wrong. The English language was invented by people who didn't understand quantum mechanics, so there is no concise yet accurate way to describe the situation using everyday language. The closest approximation is "the electron goes through both holes." In technical terms, the electron in transit is a superposition of an electron going through the right hole and an electron going through the left hole. It is hard to imagine what such an electron would look like, but the essential point is that the electron doesn't look like the classic "particle": a small, hard marble.
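The failure of the premise can be made quantitative. Quantum mechanics assigns a complex amplitude to each route, adds the amplitudes, and only then squares to get a probability, so a cross term appears that can cancel completely. A sketch with invented amplitudes for one particular detector:

```python
import cmath

# Invented complex amplitudes for reaching one detector via each hole;
# the two routes here differ by half a wavelength (a phase of pi).
psi_left  = 0.5 * cmath.exp(1j * 0.0)
psi_right = 0.5 * cmath.exp(1j * cmath.pi)

# "Common sense" (one hole or the other): add the two probabilities.
p_separate = abs(psi_left) ** 2 + abs(psi_right) ** 2  # 0.5

# Quantum rule (superposition): add amplitudes first, then square.
p_together = abs(psi_left + psi_right) ** 2            # effectively 0

print(p_separate, p_together)
```

A detector that is struck when either hole alone is open can thus be struck by no electrons at all when both holes are open, exactly as observed.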

Entanglement

The phenomenon of entanglement is difficult to describe succinctly. It always involves two (or more) particles and usually involves the measurement of two (or more) different properties of those particles. There are circumstances in which the measurement results from one particle are correlated with the measurement results from the other particle, even though the particles may be very far away from each other. In some cases, one can prove that these correlations could not occur for any classical system, no matter how elaborate. The best experimental tests of quantum mechanics involve entanglement because it is in this way that the atomic world differs most dramatically from the everyday, classical world.
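One way to make "could not occur for any classical system" precise is the CHSH test: for any classical (local hidden-variable) account, a certain combination S of measured correlations must satisfy |S| ≤ 2, while quantum mechanics predicts values up to 2√2 for a suitably entangled pair. A sketch using the textbook correlation E(a, b) = −cos(a − b) for two spins in the singlet state, with the standard choice of measurement angles:

```python
import math

def E(a, b):
    """Correlation of spin measurements along angles a and b (radians)
    for a pair of particles in the singlet state: E = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH measurement angles (in radians).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), beyond the classical bound of 2
```

Real experiments of this kind (Bell tests) report values close to this quantum prediction, which is why they are regarded as the sharpest tests of the theory.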

Mathematical Formalism

Quantum physics is richer and more textured than classical physics: quantal particles can, for example, interfere or become entangled, options that are simply unavailable to classical particles. For this reason the mathematics needed to describe a quantal situation is necessarily more elaborate than the mathematics needed to describe a corresponding classical situation. For example, suppose a single particle moves in three-dimensional space. The classical description of this particle requires six numbers (three for position and three for velocity). But the quantal description requires an infinite number of numbers—two numbers (a "magnitude" and a "phase") at every point in space.
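A sketch of this counting, using an invented 100-point grid in each direction as a stand-in for continuous space:

```python
import numpy as np

# Classical state of one particle: exactly six numbers.
classical_state = np.array([1.0, 2.0, 3.0,    # position (x, y, z)
                            0.1, 0.0, -0.2])  # velocity (vx, vy, vz)

# Quantal state: a complex number (a magnitude and a phase) at every
# point of space. Even this coarse 100 x 100 x 100 grid, standing in
# for the continuum, already needs two million real numbers.
n = 100
wave_function = np.zeros((n, n, n), dtype=complex)

print(classical_state.size)    # 6
print(2 * wave_function.size)  # 2,000,000
```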

Classical limit

Classical mechanics holds for ordinary-sized objects, while quantum mechanics holds for atomic-sized objects. So at exactly what size must one framework give way to the other? Fortunately, this question needs no answer. The truth is that quantum mechanics holds for objects of all sizes, but that classical mechanics is a good approximation to quantum mechanics when quantum mechanics is applied to ordinary-sized objects. As an analogy, the surface of the Earth is nearly spherical, yet sheet maps, not globes, are used for navigation over short distances. This "flat Earth approximation" is highly accurate for journeys of a few hundred miles but quite misleading for journeys of ten thousand miles. Similarly, the "classical approximation" is highly accurate for ordinary-sized objects but not for atomic-sized ones.

The Subatomic Domain

When scientists first investigated the atomic realm, they found that a new physical framework (namely quantum mechanics) was needed. What about the even smaller domain of elementary particle physics? The surprising answer is that, as far as is known, the quantum framework holds in this domain as well. As physicists have explored smaller and smaller objects (first atoms, then nuclei, then protons and neutrons, then quarks), they have encountered surprises and discovered new rules, with names like quantum electrodynamics and quantum chromodynamics. But these new rules have always fit comfortably within the framework of quantum mechanics.

See also: Quantum Chromodynamics; Quantum Electrodynamics; Quantum Field Theory; Quantum Tunneling; Virtual Processes

Bibliography:

Feynman, R. QED: The Strange Theory of Light and Matter (Princeton University Press, Princeton, New Jersey, 1985).

Milburn, G. J. Schrödinger's Machines: The Quantum Technology Reshaping Everyday Life (W.H. Freeman, New York, 1997).

Styer, D. F. The Strange World of Quantum Mechanics (Cambridge University Press, Cambridge, UK, 2000).

Treiman, S. The Odd Quantum (Princeton University Press, Princeton, New Jersey, 1999).

Daniel F. Styer

Quantum Mechanics

QUANTUM MECHANICS

Quantum mechanics, which is primarily concerned with the structures and activities of subatomic, atomic, and molecular entities, had a European provenance, and its story is in some ways as strange as the ideas it espouses. Although the German physicist Max Planck (1858–1947) is often credited with originating quantum theory, and although this theory's fundamental constant, which ushered in the disjunction between macroscopic and quantum realms, is named in his honor, it was the German Swiss physicist Albert Einstein (1879–1955) who really grasped the revolutionary consequences of Planck's quantum as a discrete quantity of electromagnetic radiation (later named the photon). Ironically, Einstein would later distance himself from the mainstream interpretation of quantum mechanics.

The Danish physicist Niels Bohr (1885–1962), by combining the nuclear model of the atom with quantum ideas, developed an enlightening explanation of the radiative regularities of the simple hydrogen atom, but the paradoxes of his theory (for example, nonradiating electron orbits) and its failure to make sense of more complex atoms led to a new quantum theory, which, in its first form of matrix mechanics, was the work of the German physicist Werner Heisenberg (1901–1976), whose arrays of numbers (matrices) represented observable properties of atomic constituents. Heisenberg's matrix model was highly mathematical, unlike the visualizable models favored by many scientists. However, in 1926 the Austrian physicist Erwin Schrödinger (1887–1961), basing his theory on a wave interpretation of the electron developed by the French physicist Louis de Broglie (1892–1987), proposed a wave mechanics in which he treated the electron in an atom not as a particle but by means of a wave function. Within a short time physicists proved that both matrix and wave mechanics gave equivalent quantum mechanical answers to basic questions about the atom.

Quantum mechanics proved extremely successful in providing physicists with detailed knowledge, confirmed by many experiments, of all the atoms in the periodic table, and it also enabled chemists to understand how atoms bond together in simple and complex compounds. Despite its successes quantum mechanics provoked controversial interpretations and philosophical conundrums. Such quantum physicists as Max Born (1882–1970) rejected the strict causality underlying Newtonian science and gave a probabilistic interpretation of Schrödinger's wave equation. Then, in 1927, Heisenberg introduced his uncertainty principle, which stated that an electron's position and velocity could not be precisely determined simultaneously. Impressed by Heisenberg's proposal, Bohr, in Copenhagen, developed an interpretation of quantum mechanics that became standard for several decades. This "Copenhagen interpretation," even though for some it was more a philosophical proposal than a scientific explanation, garnered the support of such physicists as Heisenberg, Born, and Wolfgang Pauli (1900–1958). But its unification of objects, observers, and measuring devices; its acceptance of discontinuous action; and its rejection of classical causality were unacceptable to such scientists as Einstein, Planck, and Schrödinger. To emphasize the absurdity of the Copenhagen interpretation, Schrödinger proposed a thought experiment involving a cat in a covered box containing a radioactive isotope with a fifty-fifty chance of decaying and thereby triggering the release of a poison gas. For Copenhagen interpreters, "Schrödinger's cat" remains in limbo between life and death until an observer uncovers the box; for Copenhagen critics, the idea of a cat who is somehow both alive and dead is ridiculous.

This and other quantum quandaries led some physicists to propose other interpretations of quantum mechanics. For example, David Bohm (1917–1992), an American physicist who worked in England in the period of American anticommunist hysteria associated with Senator Joseph McCarthy (1908–1957), proposed that Schrödinger's wave function described a real wave "piloting" a particle, and that the paradoxes of quantum mechanics could be explained in terms of "hidden variables" that would preserve causality. Einstein, who had been critical of the Copenhagen interpretation since its founding (he stated that "God does not play dice," and that a mouse cannot change the world simply by observing it), proposed, with two collaborators, a thought experiment in which distantly separated particles would, if quantum mechanics were complete, exhibit correlations that seemed to require instantaneous influences between them. Einstein would have been surprised when, decades later, versions of this experiment were actually performed and the nonlocal correlations predicted by quantum mechanics were confirmed (although such correlations cannot be used to send signals). The paradoxes of this instantaneous "entanglement" have become largely accepted by both physicists and philosophers.

After the early achievements of quantum mechanics it was natural for physicists to attempt to unify it with the other great modern theory of physics, relativity. In the late 1920s the English physicist Paul Dirac (1902–1984) developed a relativistic wave equation whose significance some scholars compared to the discoveries of Newton and Einstein. The Dirac equation was not only elegant but also successfully predicted the positive electron, or positron. Even though Dirac declared that the general theory of quantum mechanics was "almost complete," the full union of quantum mechanics and general relativity had not been achieved. Einstein spent the final decades of his life searching for a way to unify his general theory of relativity with Scottish physicist James Clerk Maxwell's (1831–1879) theory of electromagnetism, and many theoreticians after Einstein have proposed ideas attempting to join together quantum mechanics, a very successful theory of the atomic world, and general relativity, a very successful theory of the cosmic world. Superstring theory is one of these "theories of everything," and its assertion that everything, from gigantic galaxies to infinitesimal quarks, can be explained by the vibrations of minuscule lines and loops of energy in ten dimensions has generated enthusiastic supporters as well as ardent critics, who maintain that the theory, though elegant, is unverifiable and unfalsifiable (and hence not even a scientific theory).

The British cosmologist Stephen Hawking (1942–2018) brought his interpretation of quantum physics and general relativity together to deepen astronomers' understanding of black holes, regions of spacetime in which gravitational forces are so strong that not even photons can escape. Some optimists claim that the unification of quantum mechanics and general relativity has already been achieved in superstring theory, whereas pessimists claim that this quest is really attempting to reconcile the irreconcilable. As Wolfgang Pauli, Einstein's colleague at the Institute for Advanced Study, once said of his friend's search for a unified field theory: "What God has put asunder, let no man join together."

See also: Bohr, Niels; Einstein, Albert; Science.

BIBLIOGRAPHY

Al-Khalili, Jim. Quantum. London, 2003. There have been many popularizations of quantum theory, and this illustrated vade mecum by an English physicist is a good example of the genre.

Mehra, Jagdish, and Helmut Rechenberg. The Historical Development of Quantum Theory. 6 vols. New York, 1982. Some historians of science, wary of the authors' uncritical approach, have expressed reservations about the nine books of this set (some volumes have two parts), but the massive amount of scientific, historical, and biographical material collected by the authors can be helpful if used judiciously.

Penrose, Roger. The Road to Reality: A Complete Guide to the Laws of the Universe. New York, 2005. In this comprehensive mathematical and historical account of scientists' search for the basic laws underlying the universe, an important theme is the exploration of the compatibility of relativity and quantum mechanics.

Robert J. Paradowski

Quantum Mechanics


Quantum mechanics

Quantum mechanics is a method of studying the natural world based on the concept that waves of energy also have certain properties normally associated with matter, and that matter sometimes has properties that we usually associate with energy. For example, physicists normally talk about light as if it were some form of wave traveling through space. Many properties of light, such as reflection and refraction, can be understood if we think of light as waves bouncing off an object or passing through the object.

But some optical (light) phenomena cannot be explained by thinking of light as if it traveled in waves. One can only understand these phenomena by imagining tiny discrete particles of light somewhat similar to atoms. These tiny particles of light are known as photons. Photons are often described as quanta (the plural of quantum) of light. The term quantum comes from the Latin word for "how much." A quantum, or photon, of light, then, tells how much light energy there is in a "package" or "atom" of light.

The fact that waves sometimes act like matter and matter sometimes acts like waves is now known as the principle of duality. The term duality means that many phenomena have two different faces, depending on the circumstances in which they are being studied.

Macroscopic and submicroscopic properties

Until the 1920s, physicists thought they understood the macroscopic properties of nature rather well. The term macroscopic refers to properties that can be observed with the five human senses, aided or unaided. For example, the path followed by a bullet as it travels through the air can be described very accurately using only the laws of classical physics, the kind of physics originally developed by Italian scientist Galileo Galilei (1564–1642) and English physicist Isaac Newton (1642–1727).

But the methods of classical physics do not work nearly as well, and sometimes they don't work at all, when problems at the submicroscopic level are studied. The submicroscopic level involves objects and events that are too small to be seen even with the very best microscopes. The movement of an electron in an atom is an example of a submicroscopic phenomenon.

Words to Know

Classical mechanics: A collection of theories and laws that was developed early in the history of physics and that can be used to describe the motion of most macroscopic objects.

Macroscopic: A term describing objects and events that can be observed with the five human senses, aided or unaided.

Photon: A quantum, or discrete packet, of light energy.

Quantum: A discrete amount of any form of energy.

Wave: A disturbance in a medium that carries energy from one place to another.

In the first two decades of the twentieth century, physicists found that the old, familiar tools of classical physics produced peculiar answers or no answers at all in dealing with submicroscopic phenomena. As a result, they developed an entirely new way of thinking about and dealing with problems on the atomic level.

Uncertainty principle

Some of the concepts involved in quantum mechanics are very surprising, and they often run counter to our common sense. One of these is another revolutionary concept in physics: the uncertainty principle. In 1927, German physicist Werner Heisenberg (1901–1976) made a remarkable discovery about the path taken by an electron in an atom. In the macroscopic world, we always see objects by shining light on them. Why not shine light on the electron so that its movement could be seen?

But the submicroscopic world presents new problems, Heisenberg said. The electron is so small that the simple act of shining light on it will knock it out of its normal path. What a scientist would see, then, is not the electron as it really exists in an atom but as it exists when moved by a light shining on it. In general, Heisenberg went on, the very act of measuring very small objects changes the objects. What we see is not what they are but what they have become as a result of looking at them. Heisenberg called his theory the uncertainty principle. The term means that one can never be sure as to the state of affairs for any object or event at the submicroscopic level.

A new physics

Both the principle of duality and the uncertainty principle shook the foundations of physics. Concepts such as Newton's laws of motion still held true for events at the macroscopic level, but they were essentially worthless in dealing with submicroscopic phenomena. As a result, physicists essentially had to start over in thinking about the ways they studied nature. Many new techniques and methods were developed to deal with the problems of the submicroscopic world. Those techniques and methods are what we think of today as quantum physics or quantum mechanics.

[See also Light; Subatomic particles]

quantum mechanics


quantum mechanics Branch of physics that uses the quantum theory to explain the behaviour of elementary particles. According to quantum theory, all radiant energy is emitted and absorbed in multiples of tiny 'packets' or quanta. Atomic particles have wavelike properties and thereby exhibit a wave-particle duality: sometimes the wave properties dominate, and other times the particle aspects dominate. The quantum theory uses four quantum numbers to classify electrons and their atomic states: energy level, angular momentum, energy in a magnetic field, and spin. The exclusion principle says that no two electrons in an atom can occupy the same quantum state, that is, share the same set of quantum numbers. A change in an electron, atom, or molecule from one quantum state to another, called a quantum jump, is accompanied by the absorption or emission of a quantum. Quantum field theory seeks to explain this exchange; the strong interactions between quarks, and between protons and neutrons, are described by quantum chromodynamics.

The idea that energy is radiated and absorbed in packets was first proposed by German theoretical physicist Max Planck in 1900 to explain black-body radiation. Using Planck's work, German-born US physicist Albert Einstein quantized light radiation and in 1905 explained the photoelectric effect; the quantum of light energy was later named the photon. In 1913, Danish physicist Niels Bohr used quantum theory to explain atomic structure and atomic spectra, showing the relationship between the energy levels of an atom's electrons and the frequencies of radiation emitted or absorbed by the atom. In 1924, French physicist Louis de Broglie suggested that particles have wave properties, the converse having been postulated in 1905 by Einstein. In 1926, Austrian physicist Erwin Schrödinger used this hypothesis to develop wave mechanics, which predicts particle behaviour on the basis of wave properties; a year earlier, German physicist Werner Heisenberg had produced a mathematically equivalent theory without using wave concepts at all. In 1928, English physicist Paul Dirac unified these approaches while incorporating special relativity into quantum mechanics (important when large speeds are involved). His work predicted the existence of antimatter and helped develop quantum electrodynamics, the theory of how charged subatomic particles interact within electric and magnetic fields. Superstring theory offers a possible route to bringing the gravitational interaction into the same framework.

The mature quantum field theory of quantum electrodynamics, sometimes called the quantum theory of light, was developed in the 1940s, principally by US theoretical physicist Richard Feynman together with Julian Schwinger and Sin-Itiro Tomonaga. The theory predicts, for example, that when an electron and a proton collide they exchange a photon of electromagnetic radiation. Quantum mechanics remains a conceptually difficult subject, in part because the uncertainty principle, formulated in 1927 by Heisenberg, states that nothing on the atomic scale can be measured or observed without disturbing it. This makes it impossible to know both the position and the momentum of a particle exactly at the same time.
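As a concrete instance of these 'packets': Planck's relation E = hν gives the energy of a single quantum, and for visible light it is minuscule. A sketch (the frequency below is an illustrative value for green light):

```python
h = 6.626e-34       # Planck's constant, in joule-seconds
frequency = 5.6e14  # green light, in hertz (illustrative value)

# Planck's relation: each quantum of light carries energy E = h * frequency.
energy = h * frequency
print(energy)       # ~3.7e-19 joules per photon
```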

quantum mechanics


quan·tum me·chan·ics • pl. n. [treated as sing.] Physics the branch of mechanics that deals with the mathematical description of the motion and interaction of subatomic particles, incorporating the concepts of quantization of energy, wave-particle duality, the uncertainty principle, and the correspondence principle. DERIVATIVES: quan·tum-me·chan·i·cal adj.

Quantum Mechanics


See Physics, Quantum
