Certainty from uncertainty
Charles H Bennett
Quantum mechanics is best known for its uncertainty principle, but now physicists
and computer scientists have found that quantum mechanics can also serve
as a source of certainty. Quantum effects within a computer can be used,
in principle, to solve quickly and with complete certainty some kinds of
problem that a classical computer could only solve very slowly, or else
with a small chance of error (1-3).

The extra power of quantum computers arises from the ability of a quantum
system, when it is not being observed, to be in many places at the same time.
A single photon making its way through a snowball, for example, simultaneously
follows all possible optical paths through the snowball, and emerges
in a way that is determined by the constructive and destructive interference
among all the paths.

A single quantum computer could similarly, in principle, follow many distinct
computation paths all at the same time and produce a final output depending
on the interference of all of them. Although a number of obstacles (4) appear
to stand in the way of achieving controllable interference among computation
paths in a practical quantum computer, there has been much recent progress
in understanding how, in principle, the possibility of such interference
makes the theory of quantum computation differ from its somewhat better
understood classical counterpart.

An early but probably vain hope was that quantum parallelism might
provide a fast way of solving problems such as factoring or the
travelling-salesman problem, which appear
to be hard in the same way as finding a needle in a haystack is hard; that
is, because they involve a search for a successful solution among exponentially
many candidates. A computer able to test all the candidates in parallel,
and signal unambiguously if it found one that worked, would solve these problems
exponentially faster than known methods.

It is easy to program a quantum computer to branch out into exponentially
many computation paths; the difficult part is getting the paths to interfere
in a useful way at the end, so that the answer comes out with a
non-negligible probability. This difficulty is illustrated by the factoring
problem above. Suppose the quantum computer is programmed to factor a 100-digit
number by trying in parallel to divide it by all numbers of fifty digits
or fewer. If any of these approximately 10^50 computations yields
a zero remainder, it will in a sense have solved the problem. But if there
is only one successful path, the interference pattern among all the paths,
which determines the behaviour of the computer as a whole, will scarcely
be affected. Quantum computers cannot amplify an answer found on a single
computation to a detectable level because interference is an additive process,
to which each path contributes only as much weight as it started out with.
Therefore, a quantum computer's chance of quickly solving a problem
of this type is no greater than that of a classical stochastic computer,
which can choose a single computation path randomly. For interference to
achieve anything useful, it must be applied to problems whose solution depends
on many computation paths.
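The additive-weight argument can be put in numbers with a toy count (a Python sketch; the path count below is an illustrative stand-in for the 10^50 trial divisions above): a single successful path in an equal superposition over N paths carries amplitude 1/sqrt(N), so a measurement reveals it with probability 1/N, exactly the odds of a classical machine guessing one path at random.

```python
from math import sqrt

N = 1 << 20            # number of computation paths (a toy stand-in for 10**50)

# Quantum: one successful path in an equal superposition over N paths
# carries amplitude 1/sqrt(N), so a measurement finds it with probability 1/N.
p_quantum = (1 / sqrt(N)) ** 2

# Classical stochastic computer: choose one of the N paths at random.
p_classical = 1 / N

print(p_quantum == p_classical)   # True: no quantum advantage for a lone path
```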

The recent spate of progress in quantum computation stems from a paper
by David Deutsch of
Oxford University and Richard Jozsa, now at the Université de Montréal. The authors
use a quantum computer to determine global properties of a boolean function
f of an n-bit argument, by having the computer branch into
2^n computation paths, one for each value of the argument x.
They show that the computer can then cause the paths to interfere in such
a way as to determine, quickly and with no chance of error, whether the
function f is "unanimous" (f(x)=0 for all x, or f(x)=1 for all x) or "balanced"
(f(x)=0 for exactly half the x values and f(x)=1 for the other half).
A classical computer, by contrast, limited to testing arguments one
at a time, might find unanimity among all the f(x) it had tested
so far, but would still not be able absolutely to rule out the "balanced"
possibility until it had tested more than half the x values, a job requiring
an exponentially increasing amount of time. If we are unwilling to tolerate
any chance of error, the problem of distinguishing balanced from unanimous
functions can be solved quickly on a quantum computer but not on a classical
one. On the other hand, if even a tiny chance of error is allowed, a classical
stochastic computer could also solve the problem quickly, by testing a few
dozen x values at random, then declaring the function to be unanimous
if they all agreed.
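The interference Deutsch and Jozsa exploit can be seen in a small state-vector simulation (a Python sketch, not the construction from the papers cited; function names are mine): put the computer into an equal superposition over all 2^n paths with Hadamard transforms, flip the sign of each path's amplitude according to f(x), transform again, and read off the amplitude of the starting state. It is +-1 for a unanimous function and exactly 0 for a balanced one.

```python
from math import sqrt

def hadamard_n(state):
    """Apply a Hadamard transform to each qubit of a 2**n state vector."""
    n = len(state).bit_length() - 1
    s = list(state)
    for q in range(n):
        bit = 1 << q
        for i in range(len(s)):
            if i & bit == 0:                       # visit each pair once
                a, b = s[i], s[i | bit]
                s[i], s[i | bit] = (a + b) / sqrt(2), (a - b) / sqrt(2)
    return s

def deutsch_jozsa(f, n):
    """Amplitude of |0...0> after H^n, a phase oracle for f, and H^n again."""
    state = [0.0] * (1 << n)
    state[0] = 1.0                                 # start in |0...0>
    state = hadamard_n(state)                      # equal superposition of paths
    state = [(-1) ** f(x) * a for x, a in enumerate(state)]  # sign flip per path
    return hadamard_n(state)[0]                    # +-1 unanimous, 0 balanced

n = 7
print(round(deutsch_jozsa(lambda x: 0, n), 6))      # 1.0 (unanimous)
print(round(deutsch_jozsa(lambda x: x & 1, n), 6))  # 0.0 (balanced)
```

A classical program given the same f as a black box would have to call it more than 2^(n-1) times to get the same certainty.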

In the figure, the top left bar shows a unanimous function and the
bar below it a balanced function for a toy example, with n=7 and 128
x values. Although the global difference between these two bars is
obvious to the eye, a classical computer, limited to testing points one at
a time, would need to test over half the x values to be sure the upper
bar was unanimous. A single quantum computer, by contrast, could quickly
and surely detect the global difference by pursuing 128 computation paths
in parallel and causing them to interfere.

Ethan Bernstein and Umesh Vazirani at the University of California
at Berkeley have strengthened Deutsch and Jozsa's result (2) by showing that
the same quantum computer that distinguishes unanimous from balanced functions
can also (and still with no chance of error) distinguish among 2^n
- 1 kinds of balanced functions. The bars on the right of the figure show
two of these functions in the toy example: a quantum computer could distinguish
these functions from each other and from 125 other balanced functions, most
of which would look confusingly similar to the naked eye. The balanced
functions distinguishable by quantum computation (the so-called Walsh functions)
are of independent mathematical interest, being boolean analogues of the
basis functions used in Fourier analysis.
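Bernstein and Vazirani's stronger result admits a similarly small sketch (again an illustrative Python toy, not their construction verbatim). Each Walsh function is a parity f_s(x) = s·x mod 2 fixed by a hidden n-bit string s (balanced whenever s is non-zero, giving the 2^n - 1 kinds), and the same superposition/phase/superposition sequence concentrates all the amplitude on the single outcome s, so one run identifies which balanced function was supplied.

```python
def parity(a):
    """Number of set bits of a, mod 2."""
    return bin(a).count("1") & 1

def walsh(s):
    """Walsh function f_s(x) = s.x mod 2; balanced whenever s != 0."""
    return lambda x: parity(s & x)

def run_quantum(f, n):
    """Output amplitudes after H^n, a phase oracle for f, and H^n again.

    Computed directly: the amplitude at y is (1/N) * sum_x (-1)^(f(x) + x.y).
    """
    N = 1 << n
    return [sum((-1) ** (f(x) ^ parity(x & y)) for x in range(N)) / N
            for y in range(N)]

n, hidden = 7, 0b0101101
amps = run_quantum(walsh(hidden), n)
measured = max(range(1 << n), key=lambda y: abs(amps[y]))
print(measured == hidden, amps[measured])   # True 1.0
```

All the weight lands on the hidden string with certainty, whereas a classical black-box tester needs n separate queries even to do this with luck on its side.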

What, then, is the relation between the powers of quantum and classical
computation? As is well known, classical computational complexity theory
suffers from a humiliating inability to answer some of its most basic questions,
such as whether the two complexity classes - P, for deterministic polynomial
time, and NP, for nondeterministic polynomial time - are equal. In consequence,
the factoring and travelling-salesman problems alluded to earlier are merely
thought, not known, to be hard. This frustrating situation arises because,
in a real problem, alternative computation paths are not independent - for example,
in factoring, if a number is divisible by 6 then it will certainly be divisible
by 3. Thus one cannot be sure that there is not a much faster way of solving
the problem than by blind search: either of these problems might turn out
to be more like finding a needle in a department store than finding one in
a haystack.

FIG. A quantum computer can quickly and with certainty distinguish a unanimous
function (top left) from a balanced function (for example, bottom left). It can
also distinguish between a large number of subtly different balanced functions,
such as those on the right.

Unable to answer their biggest questions, complexity theorists have
retreated to proving two lesser kinds of theorems: oracle results and
completeness results. Oracle results concern the power of computers that
can ask questions of a black box without being allowed to look inside it.
The f functions considered above are an example of an oracle. Because
Deutsch and Jozsa's proof does not allow the computer to "open the box" and
examine the instructions for calculating f, it is not an absolute
proof that quantum computers are more powerful than classical ones, merely
a proof that they are more powerful if certain kinds of f functions
exist - functions that are balanced or unanimous, but whose instructions
cannot easily be analysed to determine which. Gilles Brassard of Montréal
and André Berthiaume (3), now at Oxford, as well as Bernstein and Vazirani
(2), have proved a number of oracle results based on Deutsch and Jozsa's
construction which characterize in considerable detail the power of
oracle-assisted quantum computers relative to their oracle-assisted classical
analogues. The strongest of these results is a family of oracle problems which
are easy for quantum computers but not for classical ones, even when the latter
are allowed to make errors.

The other approach, the completeness approach, eschews oracles and
is based instead on finding problems that, although not known to be hard,
can be proved to be at least as hard as any other problem in a given class.
Thus, in classical complexity theory, the travelling-salesman problem is
called NP-complete because all other problems in the class NP can be reduced
to it. It may similarly be possible to characterize the power of quantum
computation by finding problems that are provably at least as hard as any
problem in a given quantum complexity class.

Uncertainty in quantum theory
Protestations to the contrary notwithstanding, people have
been busy bridging the gap between deterministic wave mechanics and the notion
that everything is uncertain; quantum mechanics is the better for it.

It is said that when Schrödinger devised his quantum wave equation in
the turbulent year of 1926 he believed for a little while that he had shown
that the familiar simplicities of classical mechanics had, after all, been
restored. And that would be understandable. For the Schrödinger equation
is a perfectly deterministic equation exactly comparable to the equation
of motion of a classical mechanical system. If you know the state of a system
(represented by a wavefunction ψ(x, t), a function
of position and time respectively) and enough about its properties to define
the equation of motion, you can set out to calculate the state of the same
system at any later time. One of the reasons for the general misunderstanding
of quantum mechanics (but there are several, see Nature 361, 493; 11 February
1993) is the mismatch between the determinism embodied in Schrödinger's
equation and the general opinion that quantum mechanics is all about uncertainty.
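For reference, the deterministic law in question is the time-dependent Schrödinger equation: given ψ(x, t) at one instant, it fixes the wavefunction at every later instant. In its one-dimensional, single-particle form, with potential V(x):

```latex
i\hbar\,\frac{\partial \psi(x,t)}{\partial t} = \hat{H}\,\psi(x,t),
\qquad
\hat{H} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}}{\partial x^{2}} + V(x)
```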

That, of course, is how it happened historically. It fell to Born to argue
for the statistical interpretation of the different quantities calculated
by Schrödinger on the one hand and Heisenberg on the other. Thus, having
solved a wave equation, Born advocated that ψ*(x, t)ψ(x, t) should be taken
as the probability distribution of the position coordinates x at time t.
(The quantity ψ*(x, t) represents the complex conjugate of the
wavefunction.) The awkwardness of this arrangement is that it puts two
characteristic features of quantum mechanics into two quite different
intellectual pockets. The business of calculating the motion of a system
is as matter-of-fact as even a Lord Kelvin could ask, but then the result
must be interpreted as if the stochastic nature of microscopic physics is
a kind of afterthought. It is no wonder that over the past half century
there have been endless complaints from those who allow that Schrödinger's
equation "works", but who cannot (or, at least, could not) accept that physics
is inherently uncertain. To be fair, it is aesthetically (to put it mildly)
unsatisfactory that a theory should not embody the means by which it is to
be interpreted. Newton's laws meet that test: space and time are absolute,
and the quantities calculated from the newtonian equations of motion, positions,
velocities and the like, have an obvious interpretation. The same is true
of special relativity, except that the quantities are now relative. But what,
in 1926, could be said for a theory producing a well-defined result (a
wavefunction changing with time) which is then interpreted as a probability
with hardly better justification than the question, "What else can it mean?"

The argument was still raging in the 1950s. We forget that, then, the Soviet
view of quantum mechanics was that it was an "idealist" doctrine, a term of
particular abuse in the lexicon of the times. It was also generally overlooked
that the question had for practical purposes been settled by the late Richard
Feynman, whose particular way of doing quantum mechanics incorporated uncertainty
in a successful scheme of calculation. What is the chance that a particle
will go from A to B? Why, simply list all the paths between the two points
(even those not classically allowed) and calculate the likelihood that the
particle will make each journey, making proper allowance for the phase (from
which interference phenomena spring). What Feynman had done was to take literally
Bohr's argument about Young's slits ("Through which of two slits does a photon
'go' when it contributes to an interference pattern on the other side?"),
generalizing that to accommodate the behaviour of an electromagnetic field.
What magic that it worked.

Only in the past decade does the physics community
seem to have seized on the opportunities marked out by Feynman's triumph.
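Feynman's recipe (list the paths, attach a phase to each, add the amplitudes, square the total) can be sketched for the two-slit case in a few lines of Python; the geometry and wavelength below are illustrative assumptions, and only the two straight-line paths are kept.

```python
from cmath import exp
from math import pi, hypot

# Toy two-slit geometry (illustrative numbers): slits at y = +-d/2 in a
# barrier at x = 0, detector scanned along a screen at x = L.
wavelength = 0.5
k = 2 * pi / wavelength            # phase advances by k radians per unit length
d, L = 5.0, 100.0

def intensity(y):
    """Add one complex amplitude per path, then square the magnitude."""
    total = 0j
    for slit_y in (d / 2, -d / 2):
        path = hypot(L, y - slit_y)    # straight-line path, slit -> detector
        total += exp(1j * k * path)    # each path contributes e^{ik * length}
    return abs(total) ** 2

print(round(intensity(0.0), 3))        # 4.0: equal paths interfere constructively
dark = min(intensity(y / 10) for y in range(200))
print(dark < 0.01)                     # True: destructive interference off-centre
```

The interference pattern springs entirely from the phases attached to the two paths, just as in Feynman's prescription; a full sum over histories would weight every wiggly path as well.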

A review of the field last year by Roland Omnès, from Paris-Onze (Rev.
Mod. Phys. 64, 339; 1992), lists only a sprinkling of references from before
the 1980s, and then only for historical reasons. But he makes the point that
one of the inspirations for the present interest in what quantum mechanics
means were the modern realizations of the Einstein-Podolsky-Rosen experiments
on non-locality then being attempted. One of the earliest innovations in the field
(in 1984) was an attempt by Robert B. Griffiths from the Carnegie-Mellon
University in Pittsburgh to generalize Feynman's argument to the notion that,
even in quantum mechanics, a mechanical system can have a history.

The same Griffiths has now put the point more neatly
(Phys. Rev. Lett. 70, 2201; 1993) by asking what may be the equivalent in
quantum mechanics of what is called "phase space"
in ordinary classical mechanics. Phase space, of course, is at its simplest
merely a space spanned by the position coordinates and the corresponding
momenta of all the particles in a system. The phase space of a single particle
moving in one dimension has simply two dimensions, one position and one momentum.
A trajectory in phase space is then a line that shows how the momentum of
a particle changes with position. A freely moving particle is represented
simply by a straight line parallel to the position axis, for example. Is
there a quantum equivalent? Griffiths's construction is a quantum construction
from the outset. All systems have
eigenstates, represented by the
solutions of Schrödinger's equation from which the time-dependent parts have
been stripped out. In general, these states are orthogonal to each other;
if they are not, they are such that it is always possible to find a finite
number of linear combinations of them which are orthogonal to each other
and to all others. So there is a kind of phase space, one with an infinite
number of dimensions in which a single direction is the equivalent of a single
point in classical phase space. So what happens when such a system, one that
does not begin in an eigenstate, changes in the course of time? Come what may,
it is likely to begin as a combination of a finite number of the eigenstates
and so to be represented by a direction enclosed by the vectors that represent
its component states. And then it will move with the passage of time, in
ways that might be specified.

There is nothing radically new in this. The
novelty is that the formulation allows Griffiths to make mathematically explicit
the notion that quantum systems can be tackled by considering all possible
histories leading to some specified result, and then adding up their
probabilities. It is crucial to his argument that all hypothetical histories
must be not merely compatible with the end result, but also such that they
do not interfere with each other before they get there.

Nobody will seize upon Griffiths's scheme as a way of calculating any but the
simplest quantum systems,
pairs of photons for example. The interest of his work is that it seems to
clarify what many people have been driving at for the past decade, the idea
that the probability aspects of quantum mechanics should be built into the
foundations of the subject. The implications of this development for the
tortuous and sometimes tortured theory of quantum measurement are evidently
important. Even Bohr would have been content with the way the argument has