Random Reality

Space and the material world could be created out of nothing but noise. That's the startling conclusion of a new theory that attempts to explain the stuff of reality, as Marcus Chown reports

If you could lift a corner of the veil that shrouds reality, what would you see beneath? Nothing but randomness, say two Australian physicists. According to Reginald Cahill and Christopher Klinger of Flinders University in Adelaide, space and time and all the objects around us are no more than the froth on a deep sea of randomness.

Perhaps we shouldn't be surprised that randomness is a part of the Universe. After all, physicists tell us that empty space is a swirling chaos of virtual particles. And randomness comes into play in quantum theory--when a particle such as an electron is observed, its properties are randomly selected from a set of alternatives predicted by the equations.

But Cahill and Klinger believe that this hints at a much deeper randomness. "Far from being merely associated with quantum measurements, this randomness is at the very heart of reality," says Cahill. If they are right, they have created the most fundamental of all physical theories, and its implications are staggering. "Randomness generates everything," says Cahill. "It even creates the sensation of the 'present', which is so conspicuously absent from today's physics."

Their evidence comes from a surprising quarter--pure mathematics. In 1931, the Austrian-born logician Kurt Gödel stunned the mathematical world with the publication of his incompleteness theorem. It applied to formal systems--sets of assumptions and the statements that can be deduced from those assumptions by the rules of logic. For example, the Greeks developed their geometry using a few axioms, such as the idea that there is only one straight line through any pair of points. It seemed that a clever enough mathematician could prove any theorem true or false by reasoning from the axioms.

But Gödel proved that, for most sets of axioms, there are true theorems that cannot be deduced. In other words, most mathematical truths can never be proved. This bombshell could easily have sent shock waves far beyond mathematics. Physics, after all, is couched in the language of maths, so Gödel's theorem might seem to imply that it is impossible to write down a complete mathematical description of the Universe from which all physical truths can be deduced. Physicists have largely ignored Gödel's result, however. "The main reason was that the result was so abstract it did not appear to connect directly with physics," says Cahill.

But then, in the 1980s, Gregory Chaitin of IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, extended Gödel's work, and made a suggestive analogy. He called Gödel's unprovable truths random truths. What does that mean? Mathematicians define a random number as one that is incompressible. In other words, it cannot be generated by an algorithm--a set of instructions or rules such as a computer program--that is shorter than the number. Chaitin defined random truths as ones that cannot be derived from the axioms of a given formal system. A random truth has no explanation, it just is.
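
One rough way to get a feel for "incompressible" is to see how a general-purpose compressor handles patterned data versus random data. The snippet below (Python, using the standard zlib module) is only an informal illustration of the idea, not Chaitin's formal definition, which is stated in terms of the length of the shortest program that can output the number; the string lengths are arbitrary choices.

    # Informal illustration of "random = incompressible" (not Chaitin's
    # formal definition, which concerns the shortest generating program).
    import os
    import zlib

    patterned = b"01" * 5000          # 10,000 bytes with an obvious rule
    random_bytes = os.urandom(10000)  # 10,000 bytes from the OS entropy source

    print(len(zlib.compress(patterned)))     # shrinks to a few dozen bytes
    print(len(zlib.compress(random_bytes)))  # stays close to 10,000 bytes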

Chaitin showed that a vast ocean of such truths surrounds the island of provable theorems. Any one of them might be stumbled on by accident--an equation might be accidentally discovered to have some property that cannot be derived from the axioms--but none of them can be proved. The chilling conclusion, wrote Chaitin in New Scientist, is that randomness is at the very heart of pure mathematics (A Random Walk).

To prove his theorem, Gödel had concocted a statement that asserted that it was not itself provable. So Gödel's and Chaitin's results apply to any formal system that is powerful enough to make statements about itself.

"This is where physics comes in," says Cahill. "The Universe is rich enough to be self-referencing--for instance, I'm aware of myself." This suggests that most of the everyday truths of physical reality, like most mathematical truths, have no explanation. According to Cahill and Klinger, that must be because reality is based on randomness. They believe randomness is more fundamental than physical objects.

At the core of conventional physics is the idea that there are "objects"--things that are real, even if they don't interact with other things. Before writing down equations to describe how electrons, magnetic fields, space and so on work, physicists start by assuming that such things exist. It would be far more satisfying to do away with this layer of assumption.

This was recognised in the 17th century by the German mathematician Gottfried Leibniz. Leibniz believed that reality was built from things he called monads, which owed their existence solely to their relations with each other. This picture languished in the backwaters of science because it was hugely difficult to turn into a recipe for calculating things, unlike Newton's mechanics.

But Cahill and Klinger have found a way to do it. Like Leibniz's monads, their "pseudo-objects" have no intrinsic existence--they are defined only by how strongly they connect with each other, and ultimately they disappear from the model. They are mere scaffolding.

The recipe is simple: take some pseudo-objects, add a little randomness and let the whole mix evolve inside a computer. With pseudo-objects numbered 1, 2, 3, and so on, you can define some numbers to represent the strength of the connection between each pair of pseudo-objects: B12 is the strength of the connection between 1 and 2; B13 the connection between 1 and 3; and so on. They form a two-dimensional grid of numbers--a matrix.

The physicists start by filling their matrix with numbers that are very close to zero. Then they run it repeatedly through a matrix equation which adds random noise and a second, non-linear term involving the inverse of the original matrix. The randomness means that most truths or predictions of this model have no cause--the physical version of Chaitin's mathematical result. This matrix equation is largely the child of educated guesswork, but there are good precedents for that. In 1928, for example, Paul Dirac guessed at a matrix equation for how electrons behave, and ended up predicting the existence of antimatter.
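
A minimal sketch of this kind of iteration is given below. The general shape--start near zero, then repeatedly add noise plus a non-linear term built from the matrix inverse--follows the description above, but the antisymmetry constraint, the coefficients and the iteration count are illustrative guesses, not Cahill and Klinger's published scheme.

    # Sketch of the iteration described above (illustrative constants only).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 60                              # number of pseudo-objects (kept even so B stays invertible)
    B = 1e-3 * rng.normal(size=(n, n))  # connections start very close to zero
    B = B - B.T                         # assume an antisymmetric connection matrix

    alpha = 0.05                        # weight of the non-linear term (a guess)
    for step in range(200):
        noise = 1e-3 * rng.normal(size=(n, n))
        B = B - alpha * (B + np.linalg.inv(B)) + (noise - noise.T)

    # Inspect the spread of connection strengths after many iterations
    print(np.sort(np.abs(B).ravel())[-10:])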

What is antimatter?

Asked by R. Bingham, Lakewood, Colorado

R. Michael Barnett of the Lawrence Berkeley National Laboratory and Helen Quinn of the Stanford Linear Accelerator Center offer this answer, portions of which are paraphrased from their forthcoming book The Charm of Strange Quarks:

In 1928 Paul Dirac formulated a quantum theory for the motion of electrons in electric and magnetic fields, the first theory that correctly included Einstein's theory of special relativity in this context. This theory led to a surprising prediction--the equations that described the electron also described, and in fact required, the existence of another type of particle with exactly the same mass as the electron, but with positive instead of negative electric charge. This particle, which is called the positron, is the antiparticle of the electron, and it was the first example of antimatter.
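
In modern notation (not part of the original answer), Dirac's equation for a free electron of mass m reads

    (i\hbar\gamma^\mu \partial_\mu - mc)\,\psi = 0,

and its solutions come in pairs of positive and negative energy. It was the reinterpretation of the negative-energy solutions that forced the existence of a positively charged partner, the positron.
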
The discovery of the positron in experiments soon confirmed the remarkable prediction of antimatter in Dirac's theory. A cloud chamber picture taken by Carl D. Anderson in 1932 showed a particle entering from below and passing through a lead plate. The direction of the curvature of the path, caused by a magnetic field, indicated that the particle was positively charged, but with the same mass and other characteristics as an electron. Experiments today routinely produce large numbers of positrons.

PROTON-ANTIPROTON HIT (Image: CERN). This computer reconstruction of the particle tracks emerging after a proton-antiproton collision shows a Z particle that has decayed into an electron and a positron (yellow tracks).


Dirac's prediction applies not only to the electron but to all the fundamental constituents of matter (particles). Each type of particle must have a corresponding antiparticle type. The mass of any antiparticle is identical to that of the particle. All the rest of its properties are also closely related, but with the signs of all charges reversed. For example, a proton has a positive electric charge, but an antiproton has a negative electric charge. The existence of antimatter partners for all matter particles is now a well-verified phenomenon, with both partners for hundreds of such pairings observed.
New discoveries lead to new language. In coining the term "antimatter," physicists in fact redefined the meaning of the word "matter." Until that time, "matter" meant anything with substance; even today school textbooks give this definition: "matter takes up space and has mass." By adding the concept of antimatter as distinct from matter, physicists narrowed the definition of matter to apply to only certain kinds of particles, including, however, all those found in everyday experience.
Any pair of matching particle and antiparticle can be produced anytime there is sufficient energy available to provide the necessary mass-energy. Similarly, any time a particle meets its matching antiparticle, the two can annihilate one another--that is, they both disappear, leaving their energy transformed into some other form.
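As a concrete example (standard textbook numbers, not part of the original answer), creating an electron-positron pair requires at least twice the electron's rest energy,

    E_{\min} = 2 m_e c^2 \approx 2 \times 0.511\ \mathrm{MeV} \approx 1.022\ \mathrm{MeV},

and when an electron and a positron annihilate at rest, that energy reappears as two photons of about 0.511 MeV each.
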
There is no intrinsic difference between particles and antiparticles; they appear on essentially the same footing in all particle theories. This means that the laws of physics for particles are almost identical to those for antiparticles; any difference is a tiny effect. But there certainly is a dramatic difference in the numbers of these objects we find in the world around us; all the world is made of matter. Any antimatter we produce in the laboratory soon disappears because it meets up with matching matter particles and annihilates.
Modern theories of particle physics and of the evolution of the universe suggest, or even require, that antimatter and matter were equally common in the earliest stages--so why is antimatter so uncommon today? The observed imbalance between matter and antimatter is a puzzle yet to be explained. Without it, the universe today would certainly be a much less interesting place, because there would be essentially no matter left around; annihilations would have converted everything into electromagnetic radiation by now. So clearly this imbalance is a key property of the world we know. Attempts to explain it are an active area of research today.
In order to answer this question, we need to better understand the tiny part of the laws of physics that differs for matter and antimatter; without such a difference, there would be no way for an imbalance to arise. This distinction is the subject of study in a number of experiments around the world that focus on differences in the decays of particles called B-mesons and their antiparticle partners. These experiments will be done both at electron-positron collider facilities called B factories and at high-energy hadron colliders, because each type of facility offers different capabilities to contribute to the study of this detail of the laws of physics--a detail that is responsible for such an important property of the universe as the fact that there is anything there at all!

Answer posted October 18, 1999, in Scientific American.


When Cahill and Klinger's matrix goes through the wringer again and again, most of the elements remain close to zero, but some numbers suddenly become large. "Structures start forming," says Cahill. This is no coincidence: they chose the second term in the equation precisely because they knew it would lead to something like this. After all, there is structure in the Universe that has to be explained.

The structures can be seen by marking dots on a piece of paper to represent the pseudo-objects 1, 2, 3, and so on. It doesn't matter how they are arranged. If B23 is large, draw a line between 2 and 3; if B19 is large, draw one between 1 and 9. What results are "trees" of strong connections, and a lot of much weaker links. And as you keep running the equation, smaller trees start to connect to others. The network grows.
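
In the illustrative simulation sketched earlier, this tree-drawing step amounts to thresholding the matrix and reading off an adjacency list. The threshold value and the helper names below are arbitrary choices made for the sketch.

    # Sketch: keep only the strongest connections in B and build a graph.
    def strong_links(B, threshold):
        """Pairs (i, j) whose connection strength exceeds the threshold."""
        n = B.shape[0]
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if abs(B[i, j]) > threshold]

    def adjacency(links, n):
        """Adjacency list (neighbour sets) for n pseudo-objects."""
        nbrs = {i: set() for i in range(n)}
        for i, j in links:
            nbrs[i].add(j)
            nbrs[j].add(i)
        return nbrs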

The trees branch randomly, but Cahill and Klinger have found that they have a remarkable property. If you take one pseudo-object and count its nearest neighbours in the tree, second nearest neighbours, and so on, the numbers go up in proportion to the square of the number of steps away (see the "Tree roots" diagram below). This is exactly what you would get for points arranged uniformly throughout three-dimensional space. So something like our space assembles itself out of complete randomness. "It's downright creepy," says Cahill. Cahill and Klinger call the trees "gebits", because they act like bits of geometry.

Tree roots: pseudo-objects link up into random trees, which link into ever larger structures. The hierarchy of neighbours is just like that of points in 3D space.
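
The neighbour-counting test itself is a short piece of code: a breadth-first search from one pseudo-object, tallying how many others sit at each number of steps. Whether the counts really grow like the square of the distance depends on the network being measured; the function below, applied to an adjacency list like the one built above, is only the measurement.

    # Count how many pseudo-objects sit 1, 2, 3... steps away from `start`.
    # Growth proportional to the square of the distance is the signature of
    # points spread uniformly through three-dimensional space.
    from collections import deque

    def neighbour_counts(nbrs, start):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for other in nbrs[node]:
                if other not in dist:
                    dist[other] = dist[node] + 1
                    queue.append(other)
        counts = {}
        for d in dist.values():
            counts[d] = counts.get(d, 0) + 1
        return counts   # e.g. {0: 1, 1: 4, 2: 15, 3: 33, ...}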

They haven't proved that this tangle of connections is like 3D space in every respect, but as they look closer at their model, other similarities with our Universe appear. The connections between pseudo-objects decay, but they are created faster than they decay. Eventually, the number of gebits increases exponentially. So space, in Cahill and Klinger's model, expands and accelerates--just as it does in our Universe, according to observations of the recession of distant supernovae. In other words, Cahill and Klinger think their model might explain the mysterious cosmic repulsion that is speeding up the Universe's expansion.

And this expanding space isn't empty. Topological defects turn up in the forest of connections--pairs of gebits that are far apart by most routes, but have other shorter links. They are like snags in the fabric of space. Cahill and Klinger believe that these defects are the stuff we are made of, as described by the wave functions of quantum theory, because they have a special property shared by quantum entities: nonlocality. In quantum theory, the properties of two particles can be correlated, or "entangled", even when they are so far apart that no signal can pass between them. "This ghostly long-range connectivity is apparently outside of space," says Cahill. But in Cahill and Klinger's model of reality, there are some connections that act like wormholes to connect far-flung topological defects.
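
In graph terms, such a defect is a pair of nodes joined directly even though every other route between them is long. A sketch of that search is below; the cutoff and the function names are my own illustrative choices, not part of Cahill and Klinger's model.

    # Find "shortcut" links: directly connected pairs whose best alternative
    # route through the network is longer than some cutoff.
    from collections import deque

    def detour_length(nbrs, a, b):
        """Shortest path from a to b that avoids the direct link a-b."""
        dist = {a: 0}
        queue = deque([a])
        while queue:
            node = queue.popleft()
            for other in nbrs[node]:
                if node == a and other == b:
                    continue          # skip the direct link itself
                if other not in dist:
                    dist[other] = dist[node] + 1
                    if other == b:
                        return dist[other]
                    queue.append(other)
        return float("inf")           # no alternative route at all

    def shortcut_links(nbrs, cutoff=6):
        return [(a, b) for a in nbrs for b in nbrs[a]
                if a < b and detour_length(nbrs, a, b) > cutoff]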

Even the mysterious phenomenon of quantum measurement can be seen in the model. In observing a quantum system, any detector ought to become entangled with the system in a joint quantum state. We would then see weird quantum superpositions, like Schrödinger's alive-and-dead cat. But we don't.

How does the quantum state "collapse" to a simple classical one? In Cahill and Klinger's model, the nonlocal entanglements disappear after many iterations of the matrix equation. That is, ordinary 3D space reasserts itself after some time, and the ghostly connection between measuring device and system is severed.

This model could also explain our individual experience of a present moment. According to Einstein's theory of relativity, all of space-time is laid out like a four-dimensional map, with no special "present" picked out for us to feel. "Einstein thought an explanation of the present was beyond theoretical physics," says Cahill. But in the gebit picture, the future is not predetermined. You never know what it will bring, because it is dependent on randomness. "The present is therefore real and distinct from an imagined future and a recorded past," says Cahill.


Sand castles
But why can't we detect this random dance of the pseudo-objects? "Somehow, in the process of generating reality, the pseudo-objects must become hidden from view," says Cahill. To simulate this, the two physicists exploited a phenomenon called self-organised criticality.

Self-organised criticality occurs in a wide range of systems, such as growing sand piles. Quite spontaneously, these systems reach a critical state. Drop sand grains one by one onto a sand pile, for instance, and they build up into a cone until avalanches start to happen. The slope of the cone settles down to a critical value, at which avalanches occur on every scale, from tiny slips to huge collapses. The scale and timing of these avalanches doesn't depend on the size or shape of the grains: in general, it is impossible to deduce anything about the building blocks of a self-organised critical system from its large-scale behaviour.
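
This behaviour has a standard toy model, the Bak-Tang-Wiesenfeld sandpile, which makes the point concrete (this is the textbook model, not anything from Cahill and Klinger's papers): grains land at random on a grid, any site holding four or more grains topples onto its neighbours, and after a transient the avalanches that follow span many scales.

    # The classic Bak-Tang-Wiesenfeld sandpile, a textbook example of
    # self-organised criticality. Grains are dropped at random; a site with
    # 4 or more grains topples, passing one grain to each neighbour (grains
    # that fall off the edge are lost). Avalanche sizes span many scales.
    import random

    def sandpile(size=20, drops=5000, seed=1):
        random.seed(seed)
        grid = [[0] * size for _ in range(size)]
        avalanche_sizes = []
        for _ in range(drops):
            grid[random.randrange(size)][random.randrange(size)] += 1
            topples = 0
            unstable = True
            while unstable:
                unstable = False
                for x in range(size):
                    for y in range(size):
                        if grid[x][y] >= 4:
                            grid[x][y] -= 4
                            topples += 1
                            unstable = True
                            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                                nx, ny = x + dx, y + dy
                                if 0 <= nx < size and 0 <= ny < size:
                                    grid[nx][ny] += 1
            avalanche_sizes.append(topples)
        return avalanche_sizes

    sizes = sandpile()
    print(max(sizes), sum(s == 0 for s in sizes))  # huge avalanches alongside many drops that cause none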

"This is exactly what we need," says Cahill. "If our system self-organises to a state of criticality, we can construct reality from pseudo-objects and simultaneously hide them from view." The dimensionality of space doesn't depend on the properties of the pseudo-objects and their connections. All we can measure is what emerges, and even though gebits are continually being created and destroyed, what emerges is smooth 3D space. Creating reality in this way is like pulling yourself up by your bootstraps, throwing away the bootstraps and still managing to stay suspended in mid-air.

This overcomes a problem with the conventional picture of reality. Even if we discover the laws of physics, we are still left with the question: where do they come from? And where do the laws that explain where they come from come from? Unless there is a level of laws that explain themselves, or turn out to be the only mathematically consistent set--as Steven Weinberg of the University of Texas at Austin believes--we are left with an infinite regression. "But it ceases to be a problem if self-organised criticality hides the lowest layer of reality," says Cahill. "The start-up pseudo-objects can be viewed as nothing more than a bundle of weakly linked pseudo-objects, and so on ad infinitum. But no experiment will be able to probe this structure, so we have covered our tracks completely."

Other physicists are impressed by Cahill and Klinger's claims. "I have never heard of anyone working on such a fundamental level as this," says Roy Frieden of the University of Arizona in Tucson. "I agree with the basic premise that 'everything' is ultimately random, but am still sceptical of the details." He would like to see more emerge from the model before committing himself. "It would be much more convincing if Cahill and Klinger could show something physical--that is, some physical law--emerging from this," says Frieden. "For example, if this is to be a model of space, I would expect something like Einstein's field equation for local space curvatures emerging. Now that would be something."

"It sounds rather far-out," says John Baez of the University of California at Riverside. "I would be amazed--though pleased--if they could actually do what you say they claim to."

"I've seen several physics papers like this that try to get space-time or even the laws of physics to emerge from random structures at a lower level," says Chaitin. "They're interesting efforts, and show how deeply ingrained the statistical point of view is in physics, but they are difficult, path-breaking and highly tentative efforts far removed from the mainstream of contemporary physics."

What next? Cahill and Klinger hope to find that everything--matter and the laws of physics--emerges spontaneously from the interlinking of gebits. Then we would know for sure that reality is based on randomness. It's a remarkable ambition, but they have already come a long way. They have created a picture of reality without objects and shown that it can emerge solely out of the connections of pseudo-objects. They have shown that space can arise out of randomness. And, what's more, a kind of space that allows both ordinary geometry and the non-locality of quantum phenomena--two aspects of reality which, until now, have appeared incompatible.

Perhaps what is most impressive, though, is that Cahill and Klinger are the first to create a picture of reality that takes into account the fundamental limitations of logic discovered by Gödel and Chaitin. In the words of Cahill: "It is the logic of the limitations of logic that is ultimately responsible for generating this new physics, which appears to be predicting something very much like our reality."

New Scientist, 26 February 2000