The theory of chaos uncovers a new "uncertainty principle" which governs
how the real world behaves. It also explains why time goes in only one direction.
Peter Coveney

The nature of time is central not
only to our understanding of the world around us, including the physics of
how the Universe came into being and how it evolves, but it also affects
issues such as the relation between science, culture and human perception.
Yet scientists still do not have an easily understandable definition of
time.
The problem is that in the everyday world, time appears to
go in one direction: it has an arrow. Cups of tea cool, snowmen melt and bulls
wreak havoc in china shops. We never see the reverse processes. This unrelenting
march of time is captured in thermodynamics, the science of irreversible
processes. But underpinning thermodynamics are supposedly more fundamental
laws of the Universe: the laws of motion given by
Newtonian and
quantum mechanics. The equations describing these
laws do not distinguish between past and future. Time appears to be a reversible
quantity with no arrow.
So we have a conflict between irreversible
laws of thermodynamics and the reversible mechanical
laws of motion. Does the notion of an arrow of time have to be given up,
or do we need to change the fundamental dynamical laws? Today, the theory
of chaos can help with the answer.
The second law of thermodynamics is, according
to Arthur Eddington, the "supreme law of Nature". It arose from a simple
observation: in any macroscopic mechanical process, some or all of the energy
always gets dissipated as heat. Just think of rubbing your hands together.
In 1850, when Rudolf Clausius, the German physicist, first saw the far-reaching
ramifications of this mundane observation, he introduced the concept of
"entropy" as a quantity that relentlessly
increases because of this heat dissipation. Because heat is the random movement
of the individual particles that make up the system, entropy has come to
be interpreted as the amount of disorder the system contains. It provides
a way of connecting the microscopic world, where Newtonian and quantum mechanics
rule, with the
macroscopic laws of thermodynamics.
For isolated systems that exchange neither energy nor matter with their
surroundings, the entropy continues to grow until it reaches its maximum
value at what is called thermodynamic equilibrium. This is the final state
of the system when there is no change in the macroscopic properties (density,
pressure and so on) with time. The concept of equilibrium has proved of great
value in thermodynamics. Unfortunately, as a result, most scientists talk
about thermodynamics and entropy only in terms of equilibrium states, even
though this amounts to a major restriction, as we shall soon see.
It is rare to encounter truly isolated systems. They are more
likely to be "closed" (exchanging energy but not matter with the surroundings)
or "open" (exchanging both energy and matter). Imagine compressing a gas
in a cylinder with a piston. The gas and the cylinder constitute a closed
system, so we have to take into account the entropy changes arising from
the exchange of energy with the surroundings as well as the entropy change
within the gas.
The traditional thermodynamic approach to describing what happens
is as follows. The total entropy of the system and the surroundings will
be at a maximum for an equilibrium state. This entropy will not change as
the volume of the gas is reduced, provided that the gas and the surroundings
remain at all instants in equilibrium. This process would be reversible.
In order to achieve it, the difference between the external pressure and
that of the gas must be infinitesimally small to maintain the state of
equilibrium at every moment. In practice, of course, this so-called
"quasi-static" compression would take an eternity to perform.
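The bookkeeping for such a reversible compression can be made concrete with a little arithmetic. The sketch below is a hypothetical illustration, assuming one mole of ideal gas held at constant temperature: the entropy lost by the gas is exactly recovered by the surroundings, so the total entropy change vanishes, the hallmark of a quasi-static process.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def isothermal_entropy_changes(n_mol, v_initial, v_final):
    """Entropy bookkeeping for a reversible (quasi-static) isothermal
    compression of an ideal gas: dS_gas = nR ln(V2/V1), while the heat
    released to the surroundings raises their entropy by exactly -dS_gas."""
    ds_gas = n_mol * R * math.log(v_final / v_initial)
    ds_surroundings = -ds_gas  # reversible: heat leaves at the same temperature
    return ds_gas, ds_surroundings

# Halve the volume of one mole of gas, infinitely slowly:
ds_gas, ds_surr = isothermal_entropy_changes(1.0, 1.0, 0.5)
print(ds_gas, ds_surr, ds_gas + ds_surr)  # gas entropy falls; the total change is zero
```

Any real, finite-speed compression would instead produce a strictly positive total, which is why only the idealised limit is reversible.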
The remarkable conclusion is that equilibrium thermodynamics
cannot, therefore, describe change, which is the very means by which we are
aware of time. The reason why physicists and chemists rely on equilibrium
thermodynamics so much is that it is mathematically easy to use: it produces
the quantities, such as entropy, describing the final equilibrium state of
an evolving system. Entropy is a so-called thermodynamic "potential".
In reality, all processes take a finite time to happen and,
therefore, always proceed out of equilibrium. Theoretically, a system can
only aspire to reach equilibrium; it will never actually reach it. It
is, therefore, somewhat ironic that thermodynamicists have focused their
attention on the special case of thermodynamic equilibrium. For the difference
between equilibrium and nonequilibrium is as stark as that between a journey
and its destination, or the words of this sentence and the full stop that
ends it. It is only by virtue of irreversible nonequilibrium processes that
a system reaches a state of equilibrium. Life itself is a nonequilibrium
process: ageing is irreversible. Equilibrium is reached only at death, when
a decayed corpse crumbles into dust.
Obviously, you have to use nonequilibrium thermodynamics when
dealing with systems that are prevented from reaching equilibrium by external
influences, say, where there is a continuous exchange of materials and energy
with the environment. Living systems are typical examples.
As an example of a nonequilibrium system, consider an iron rod whose ends
are initially at different temperatures. Normally, if one end is hotter than
the other, the temperature gradient along the rod would cause the hot end
to cool down and the cooler end to warm up until the rod attained a uniform
temperature. This is the equilibrium situation. If, however, we maintain
one end at a higher temperature, the rod experiences a continual thermodynamic
force (a temperature gradient) causing the heat flow, or thermodynamic flux,
along the rod. The rod's entropy production is given by the product of the
force and the flux, in other words the heat flow multiplied by the temperature
gradient.
If the system is close to equilibrium, the fluxes depend in
a simple, linear way on the forces: if the force is doubled, then so is the
flux. This is linear thermodynamics, which was put on a firm footing by Lars
Onsager of Yale University during the 1930s. At equilibrium, the forces vanish
and so too do the fluxes.
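The iron-rod picture can be put into a few lines of code. The sketch below is an illustrative discretisation, not a model of any particular experiment: heat flows between neighbouring segments in proportion to their temperature difference (the linear flux-force law), and the entropy production is tracked with the thermodynamic force taken in its exact form, the gradient of 1/T. The production is always positive and dies away as the rod approaches a uniform equilibrium temperature, where forces and fluxes vanish together.

```python
def simulate_rod(n=20, steps=2000, k=0.1):
    """Relax a rod, hot at one end, toward uniform temperature.
    The flux between neighbouring cells is linear in their temperature
    difference (Fourier's law); the entropy production is the sum over
    links of flux times force, the force being the difference in 1/T."""
    T = [300.0 + 100.0 * i / (n - 1) for i in range(n)]  # 300 K .. 400 K
    history = []
    for _ in range(steps):
        fluxes = [k * (T[i] - T[i + 1]) for i in range(n - 1)]  # linear law
        sigma = sum(J * (1.0 / T[i + 1] - 1.0 / T[i])
                    for i, J in enumerate(fluxes))
        history.append(sigma)
        for i, J in enumerate(fluxes):  # move heat down the gradient
            T[i] -= J
            T[i + 1] += J
    return history

history = simulate_rod()
print(history[0], history[-1])  # positive production, decaying toward zero
```

Holding one end hot, as in the text, would instead pin the production at a constant positive value: the signature of a nonequilibrium steady state.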
Ilya
Prigogine, a theoretical physicist and physical
chemist at the University of Brussels, was the first researcher to tackle
entropy in nonequilibrium thermodynamics. In 1945, he showed that for systems
close to equilibrium, the thermodynamic potential is the rate at which entropy
is produced by the system; this is called "dissipation". Prigogine came up
with a theorem of minimum entropy production which predicts that such systems
evolve to a steady state that minimises the dissipation. This is reminiscent
of equilibrium thermodynamics; the final state is uniform in space and does
not vary with time.
Systems far from equilibrium
Prigogine's minimum entropy production theorem is an important result. Together
with his colleague Paul Glansdorff and others in Brussels, Prigogine then
set out to explore systems maintained even further away from equilibrium,
where the linear law for force and flux breaks down, to see whether it was
possible to extend his theorem into a general criterion that would work
for nonlinear,
far from equilibrium situations.
Over a period of some 20 years, the research group at Brussels
elaborated a theory widely known as "generalised thermodynamics" (a term,
in fact, never used by this group). To apply thermodynamic principles to
far from equilibrium problems, Glansdorff and Prigogine assumed that such
systems behave like a good-natured patchwork of equilibrium systems. In this
way, entropy and other thermodynamic quantities depend, as before, on variables
such as temperature and pressure.

Systems far from equilibrium can split into two stable states, as in Figure (a); the slightest tremor can trigger many splittings, as in Figure (b)
The Glansdorff-Prigogine criterion makes a general statement
about the stability of far from equilibrium steady states. It says that
they may become unstable as they are driven further from equilibrium: there
may arise a crisis, or bifurcation point, at which
the system prefers to leave the steady state, evolving instead into some
other stable state (see Figure 1a).
The important new possibility is that beyond the first crisis point,
highly organised states can suddenly
appear. In some nonequilibrium chemical reactions, for example, regular
colour changes start to happen, producing "chemical clocks"; in others,
beautiful scrolls of colour arise. Such dynamical states are not associated
with minimal entropy production by the system; however, the entropy produced
is exported to the external environment.
PUSHING THE SECOND LAW TO THE LIMIT. Australian researchers
have experimentally shown that microscopic systems (a nanomachine) may
spontaneously become more orderly for short periods of time, a development
that would be tantamount to violating the second law of thermodynamics, if
it happened in a larger system. Don't worry, nature still rigorously enforces
the venerable second law in macroscopic systems, but engineers will want
to keep limits to the second law in mind when designing nanoscale machines.
The new experiment also potentially has important ramifications for an
understanding of the mechanics of life on the scale of microbes and cells.
There are numerous ways to summarize the second law of thermodynamics. One
of the simplest is to note that it's impossible simply to extract the heat
energy from some reservoir and use it to do work. Otherwise, machines could
run on the energy in a glass of water, for example, by extracting heat and
leaving behind a lump of ice. If this were possible, refrigerators and freezers
could create electrical power rather than consuming it. The second law typically
concerns collections of many trillions of particles, such as the molecules
in an iron rod, a cup of tea, or a helium balloon, and it works well
because it is essentially a statistical statement about the collective behavior
of countless particles we could never hope to track individually. In systems
of only a few particles, the statistics are grainier, and circumstances may
arise that would be highly improbable in large systems. Therefore, the second
law of thermodynamics is not generally applied to small collections of particles.
The experiment at the Australian National University in Canberra and Griffith
University in Brisbane (Edith Sevick, sevick@rsc.anu.edu.au, 011+61261250508)
looks at aspects of thermodynamics in the hazy middle ground between very
small and very large systems. The researchers used optical tweezers to grab
hold of a micron-sized bead and drag it through water. By measuring the motion
of the bead and calculating the minuscule forces on it, the researchers were
able to show that the bead was sometimes kicked by the water molecules in
such a way that energy was transferred from the water to the bead. In effect,
heat energy was extracted from the reservoir and used to do work (helping
to move the bead) in apparent violation of the second law. As it turns out,
when the bead was briefly moved over short distances, it was almost as likely
to extract energy from the water as it was to add energy to the water. But
when the bead was moved for more than about 2 seconds at a time, the second
law took over again and no useful energy could be extracted from the motion
of the water molecules, eliminating the possibility of micron-sized perpetual
motion machines that run for more than a few seconds. Nevertheless, many
physicists will be surprised to learn that the second law is not entirely
valid for systems as large as the bead-and-water experiment, and for periods
on the order of seconds. After all, even a cubic micron of water contains
about thirty billion molecules. While it's still not possible to do useful
work by turning water into ice, the experiment suggests that nanoscale machines
may have to deal with phenomena that are more bizarre than most engineers
realize. Such tiny devices may even end up running backwards for brief periods
due to the counterintuitive energy flow. The research may also be important
to biologists because many of the cells and microbes they study comprise
systems comparable in size to the bead-and-water experiment. (G. M. Wang et
al., Physical Review Letters, 29 July 2002)
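The statistical character of the law can be quantified with a back-of-envelope calculation. The sketch below computes the base-10 logarithm of the probability that a chance fluctuation gathers all n molecules into one specified half of their container: conceivable for a handful of particles, but hopeless long before n reaches the thirty billion molecules in a cubic micron of water.

```python
import math

def log10_prob_all_left(n):
    """log10 of the chance that all n molecules are found, at one instant,
    in the left half of their container (each side equally likely)."""
    return n * math.log10(0.5)

for n in [5, 50, 1000]:
    print(n, log10_prob_all_left(n))

# For ~3e10 molecules (a cubic micron of water) the exponent is
# around minus nine billion: never in the lifetime of the Universe.
print(log10_prob_all_left(3e10))
```

This is why the second law is safe for macroscopic systems yet negotiable, briefly, for the few-particle fluctuations the sidebar describes.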
As a result, we have to reconsider associating the arrow of
time with uniform degeneration into randomness, at least on a local level.
At the "end" of time, at equilibrium, randomness may have the last laugh.
But over shorter timescales, we can witness the emergence of exquisitely
ordered structures which exist as long as the flow of matter and energy is
maintained, as illustrated by ourselves, for example.
The Glansdorff-Prigogine criterion is not a universal guiding
principle of irreversible evolution because there is an enormous range of
possible behaviours available far from equilibrium. How a nonequilibrium
system evolves over time can depend very sensitively on the system's microscopic
properties (the motions of its constituent atoms and molecules) and not merely
on large-scale parameters such as temperature and pressure. Far from equilibrium,
the smallest of fluctuations can lead to radically new behaviour on the
macroscopic scale. A myriad of bifurcations can carry the system in a
random way into new stable states (see Figure 1b). These nonuniform states
of structural organisation, varying in time or space (or both), were dubbed
"dissipative structures" by Prigogine; the spontaneous development of such
structures is known as "self-organisation".
Existing thermodynamic theory cannot throw light on the behaviour
of nonequilibrium systems beyond the first bifurcation point as we leave
equilibrium behind. We can explore such states theoretically only by considering
the dynamics of the systems. To describe the oneway evolution of such non
equilibrium systems, we must construct mathematical models based on equations
that show how various observable properties of a system change with time. In
agreement with the second law of thermodynamics, such sets of equations
describing irreversible processes always contain the arrow of time.
Chemical reactions provide typical examples of how this works.
We can describe how fast such chemical reactions go as they are driven in
the direction of thermodynamic equilibrium in terms of rate laws written
in the form of differential equations. The quantities that we can measure
are the concentrations of the chemicals involved and the rates at which they
change with time.
We do not expect to see self-organising processes in every chemical
reaction maintained far from equilibrium. But we often find that the mechanism
underlying a reaction leads to differential equations that are nonlinear.
Nonlinearities arise, for example, when a certain chemical present enhances
(or suppresses) its own production; and they can generate unexpected complexity.
Indeed, such nonlinearities are necessary, but not sufficient, for self-organised
structures, including deterministic chaos, to appear. An example is the
famous
Belousov-Zhabotinski reaction discovered
in the 1950s, described by Stephen Scott in his recent article on chemical
chaos ( "Clocks and chaos in chemistry", New Scientist, 2 December 1989).
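The flavour of such a chemical clock can be captured in a few lines. The sketch below integrates the Brusselator, a deliberately simplified textbook reaction scheme (not the actual Belousov-Zhabotinski mechanism), whose nonlinearity comes from a species that enhances its own production; driven hard enough (b > 1 + a^2), the concentrations refuse to settle down and oscillate indefinitely.

```python
def brusselator(a=1.0, b=3.0, x0=1.0, y0=1.0, dt=0.005, steps=40000):
    """Integrate the Brusselator rate equations by simple Euler steps:
        dx/dt = a + x^2 y - (b + 1) x    (the x^2 y term is autocatalytic)
        dy/dt = b x - x^2 y
    For b > 1 + a^2 the steady state is unstable, and the concentrations
    settle onto sustained oscillations: a 'chemical clock'."""
    x, y = x0, y0
    xs = []
    for _ in range(steps):
        dx = a + x * x * y - (b + 1.0) * x
        dy = b * x - x * x * y
        x, y = x + dt * dx, y + dt * dy
        xs.append(x)
    return xs

xs = brusselator()
late = xs[len(xs) // 2:]      # discard the initial transient
print(min(late), max(late))   # the concentration keeps swinging: a clock
```

With b below the threshold the same code relaxes monotonically to the steady state, illustrating that nonlinearity is necessary but not sufficient for self-organisation.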
Today, as this series of chaos articles admirably shows, many
scientists are using nonlinear dynamics to model a dizzying range of complicated
phenomena, from fluid dynamics, through chemical and biochemical processes,
to genetic variation, heart beats, population dynamics, evolutionary theory
and even into economics. The two universal features of all these different
phenomena are their irreversibility and their nonlinearity. Deterministic
chaos is only one possible consequence; the other is a more regular
self-organisation; indeed, chaos is just a special, but very interesting,
form of self-organisation in which there is an overload of order.
We can again ask what is the origin of the irreversibility enshrined
within the second law of thermodynamics. The traditional reductionist view
is that we should seek the explanation on the basis of the reversible mechanical
equations of motion. But, as the physicist
Ludwig Boltzmann discovered, it is not
possible to base the arrow of time directly on equations that ignore it.
His failed attempt to reconcile microscopic mechanics with the second law
gave rise to the "irreversibility paradox" that I mentioned at the beginning
of this article.
The standard way of attempting to derive the equations employed
in nonequilibrium thermodynamics starts from the equations of motion, whether
classical or quantum mechanical, of the individual particles making up the
system, which might, for example, be a gas. Because we cannot know the exact
position and velocity of every particle, we have to turn to probability
theorystatistical methodsto relate the average behaviour of each particle
to the overall behaviour of the system. This is called statistical mechanics.
The approach works very well because of the exceedingly large numbers of
particles involved (of the order of 10^{24}). The reason for using
probabilistic methods is not merely the practical difficulty of being unable
to measure the initial positions and velocities of the participating particles.
Quantum mechanics predicts these restrictions as a consequence of
Heisenberg's uncertainty principle.
But the same is also true for sufficiently unstable chaotic classical dynamical
systems. In Ian Percival's article last year ("Chaos: a science for the real
world", New Scientist, 21 October 1989), he explained that one of the
characteristic features of a chaotic system is its
sensitivity to the initial conditions: the behaviour
of systems with different initial conditions, no matter how similar, diverges
exponentially as time goes on. To predict the future, you would have
to measure the initial conditions with literally infinite precision, a
task impossible in principle as well as in practice. Again, this means
we have to rely on a probabilistic description even at the microscopic
level.
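This exponential divergence is easy to see in the simplest chaotic system of all. The sketch below iterates the logistic map (a standard illustration, not one of the physical systems discussed here) from two starting values differing by one part in a trillion; within a few dozen steps the mismatch has been amplified to order one and all predictive power is gone.

```python
def divergence(x0=0.4, eps=1e-12, r=4.0, steps=50):
    """Iterate the logistic map x -> r x (1 - x), which is chaotic at
    r = 4, from two starting points differing by eps, and record how
    far apart the two trajectories drift at each step."""
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a, b = r * a * (1.0 - a), r * b * (1.0 - b)
        gaps.append(abs(a - b))
    return gaps

gaps = divergence()
print(gaps[0], max(gaps))  # a 1e-12 mismatch is amplified to order one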
Chaotic systems are irreversible in a spectacular way, so we
would like to find an entropylike quantity associated with them because
it is entropy that measures change and furnishes the arrow of time. Theorists
have made the greatest progress in a class of dynamical systems called
ergodic systems. An ergodic system is one which will pass through
every possible dynamical state compatible with its energy. The foundations
of ergodic theory were laid down by
John von Neumann, George Birkhoff, Eberhard
Hopf and Paul Halmos during the 1930s and more recently developed by Soviet
mathematicians including Andrei Kolmogorov, Dmitrii Anosov, Vladimir Arnold
and Yasha Sinai. Their work has revealed that there is a whole hierarchy
of behaviours within dynamical systems: some simple, some complex, some
paradoxically simple and complex at the same time.
As with the systems described in previous articles on chaos,
we can use "phase portraits" to show
how an ergodic system behaves. But in this case, we portray the initial state
of the system as a bundle of points in phase space, rather than a single
point. Figure 2a shows a nonergodic system: the bundle retains its shape
and moves in a periodic fashion over a limited portion of the space. Figure
2b shows an ergodic system; the bundle maintains its shape but now roves
around all parts of the space. In Figure 2c the bundle, whose volume must
remain constant, spreads out into ever finer fibres, like a drop of ink spreading
in water; eventually it invades every part of the space. This is a consequence
of what is called Liouville's theorem. In other words, the total
probability must be conserved (and add up to 1); the bundle behaves like
an incompressible fluid drop. This is an example of a "mixing ergodic flow",
and manifests an approach to thermodynamic equilibrium when the time evolution
ceases. Such spreading out implies a form of dynamical chaos. The bundle
spreads out because all the trajectories that it contains diverge from each
other exponentially fast. Hence it can arise only for a chaotic dynamical
system.
The remarkable differences in behaviour in "phase space" between a simple system (a), a so-called ergodic system (b) and a mixing ergodic system (c), which is chaotic 
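The fibrous spreading of the ink-drop picture can be imitated with a toy system. The sketch below evolves a tight bundle of phase-space points under Arnold's cat map, a standard textbook example of a mixing (indeed K-) system, and counts what fraction of a coarse grid of cells the bundle has invaded after each step: it starts in a single cell and soon reaches almost all of them.

```python
import random

def cat_map_spread(n_points=2000, steps=8, grid=8):
    """Evolve a small bundle of points on the unit torus under Arnold's
    cat map (x, y) -> (x + y, x + 2y) mod 1, and report the fraction of
    an 8x8 grid of cells occupied by the bundle after each iteration."""
    random.seed(0)
    pts = [(0.1 + 0.01 * random.random(), 0.1 + 0.01 * random.random())
           for _ in range(n_points)]

    def coverage(points):
        cells = {(int(x * grid), int(y * grid)) for x, y in points}
        return len(cells) / grid ** 2

    cov = [coverage(pts)]
    for _ in range(steps):
        # stretching along one direction, contraction along the other
        pts = [((x + y) % 1.0, (x + 2.0 * y) % 1.0) for x, y in pts]
        cov.append(coverage(pts))
    return cov

cov = cat_map_spread()
print(cov[0], cov[-1])  # the bundle starts in one cell and invades the space
```

The stretching rate here plays the role of the exponential divergence of trajectories described in the text; a non-chaotic map would leave the bundle compact forever.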
Mixing flows are only one member of a hierarchy of increasingly
unstable and thus chaotic ergodic dynamical systems. Even more random are
the so-called K-flows, named after Kolmogorov. Their behaviour is at the
limit of total unpredictability: they have the remarkable property that even
an infinite number of prior measurements cannot predict the outcome of the
very next measurement. My colleague Baidyanath Misra, working in collaboration
with Prigogine, has found an entropylike quantity with the desired property
of increasing with time in this class of highly chaotic systems. The chaotic
K-flow property is widespread among systems where collisions between particles
dominate the dynamics, from those consisting of just three billiard balls
in a box (as shown by Sinai in his pioneering work of 1962) to gases containing
many particles considered as hard spheres. Many theorists believe, although
they have not yet proved it, that most systems found in everyday life are
also K-flows. My colleague Oliver Penrose, of Heriot-Watt University, and
I are trying to establish, by mathematically rigorous methods, whether we
can formulate exact kinetic equations for such systems in the way originally
proposed by Boltzmann.
In joint research with Maurice Courbage, also at Brussels,
Misra and Prigogine discovered a new definition of time for K-flows consistent
with irreversibility. This quantity, called the "internal time", represents
the age of a dynamical system. You can think of the age as reflecting a system's
irreversible thermodynamic aspects, while the description held in Newton's
equations for the same system portrays purely reversible dynamical features.
Thermodynamics and mechanics have been pitted against one another
for more than a century but now we have revealed a fascinating relationship.
Just as with the uncertainty principle in quantum mechanics, where knowing
the position of a particle accurately prevents us from knowing its momentum
and vice versa, we now find a new kind of uncertainty principle that applies
to chaotic dynamical systems. This new principle shows that complete certainty
of the thermodynamic properties of a system (through knowledge of its
irreversible age) renders the reversible dynamical description meaningless,
whilst complete certainty in the dynamical description similarly disables
the thermodynamic view.
Understanding dynamical chaos has helped to sharpen our
understanding of the concept of entropy. Entropy turns out to be a property
of unstable dynamical systems, for which the cherished notion of determinism
is overturned and replaced by probabilities and the game of chance. It seems
that reversibility and irreversibility are opposite sides of the same coin.
As physicists have already found through quantum mechanics, the full structure
of the world is richer than our language can express and our brains comprehend.
Many deep problems remain open for exploration, but at least we have made
a start.
Peter Coveney is a lecturer in physical chemistry
at the University of Wales, Bangor. His book, written with Roger Highfield,
The Arrow of Time, has just been published by W.H. Allen.
Second Law of Thermodynamics Violated
It seems that something odd happens to the second law of thermodynamics when systems get sufficiently small. The law states that the entropy, or disorder, of the universe increases over time, and it holds steadfast for large-scale systems. For instance, whereas a hot beverage will spontaneously dissipate heat to the surrounding air (an increase in disorder), the air cannot heat the liquid without added energy. Nearly a decade ago, scientists predicted that small assemblages of molecules inside larger systems may not always abide by the principle. Now Australian researchers writing in the July 29 issue of Physical Review Letters report that even larger systems of thousands of molecules can also undergo fleeting energy increases that seem to violate the venerable law.
Genmiao M. Wang of the Australian National University and colleagues discovered the anomaly when they dragged a micron-sized bead through a container of water using optical tweezers. The team found that, on occasion, the water molecules interacted with the bead in such a way that energy was transferred from the liquid to the bead. These additional kicks used the random thermal motion of the water to do the work of moving the bead, in effect yielding something for nothing. For periods of movement lasting less than two seconds, the bead was almost as likely to gain energy from the water as it was to add energy to the reservoir, the investigators say. No useful amounts of energy could be extracted from the setup, however, because the effect disappeared if the bead was moved for time intervals greater than two seconds.
The findings suggest that the miniaturization of machines may have inherent limitations. Noting that nanomachines are not simply "rescaled versions of their larger counterparts," the researchers conclude that "as they become smaller, the probability that they will run in reverse inescapably becomes greater." Sarah Graham [Scientific American, July 30, 2002]
