A random walk in arithmetic

God not only plays dice in physics but also in pure mathematics. Mathematical truth is sometimes nothing more than a perfect coin toss.

G. J. Chaitin

The notion of randomness obsesses physicists today. To what extent can we predict the future? Does it depend on our own limitations? Or is it in principle impossible to predict the future? The question of predictability has a long history in physics. In the early 19th century, the classical deterministic laws of Isaac Newton led Pierre Simon de Laplace to believe that the future of the Universe could be determined forever.

Then quantum mechanics came along. This is the theory that is fundamental to our understanding of the nature of matter. It describes very small objects, such as electrons and other fundamental particles. One of its controversial features was that it introduced probability and randomness into physics at a fundamental level. This deeply upset Albert Einstein, who said that God did not play dice.

Then surprisingly, the modern study of nonlinear dynamics showed us that even the classical physics of Newton had randomness and unpredictability at its core. The theory of chaos, as the series of articles in New Scientist last year described, has revealed how the notion of randomness and unpredictability is beginning to look like a unifying principle.

It seems that the same principle even extends to mathematics. I can show that there are theorems connected with number theory that cannot be proved because when we ask the appropriate questions, we obtain results that are equivalent to the random toss of a coin.

My results would have shocked many 19th-century mathematicians, who believed that mathematical truths could always be proved. For example, in 1900, the mathematician David Hilbert gave a famous lecture in which he proposed a list of 23 problems as a challenge to the new century. His sixth problem had to do with establishing the fundamental universal truths, or axioms, of physics. One of the points in this question concerned probability theory. To Hilbert, probability was simply a practical tool that came from physics; it helped to describe the real world when there was only a limited amount of information available.

Another question he discussed was his tenth problem, which was connected with solving so-called 'diophantine' equations, named after the Greek mathematician Diophantus. These are algebraic equations involving only whole numbers, or integers. Hilbert asked: 'Is there a way of deciding whether or not an algebraic equation has a solution in whole numbers?'
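
The difficulty can be made concrete with a small sketch (in Python, and not part of the original article): a brute-force search can exhibit whole-number solutions, but no finite search can ever establish that none exist, and Hilbert asked for a method that always decides one way or the other.

    from itertools import product

    def solutions(equation, bound):
        # Try every triple of whole numbers below bound. Searching can
        # only confirm solutions; it can never rule them out, since a
        # solution might always lie just beyond the bound.
        return [t for t in product(range(bound), repeat=3)
                if equation(*t) == 0]

    # example: x^2 + y^2 = z^2 has many whole-number solutions
    print(solutions(lambda x, y, z: x*x + y*y - z*z, 15))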

Little did Hilbert imagine that these two questions were subtly related. He had assumed something so basic to his thinking that he did not even formulate it as a question in his talk: the idea that every mathematical problem has a solution. We may not be bright enough or we may not have worked long enough on the problem but, in principle, it should be possible to solve it - or so Hilbert thought. For him, it was a black or white situation.

It seems now that Hilbert was on shaky ground. In fact, there is a connection between Hilbert's sixth question dealing with probability theory and his tenth problem of solving algebraic equations in whole numbers that leads to a surprising and rather chilling result. That is: randomness lurks at the heart of that most traditional branch of pure mathematics, number theory.

Clear, simple mathematical questions do not always have clear answers. In elementary number theory, questions involving diophantine equations can give answers that are completely random and look grey, rather than black or white. The answers are random because the only way to settle such a question is to postulate each answer as an additional independent axiom. Einstein would be horrified to discover that God plays dice not only in quantum and classical physics but also in pure mathematics.

Where does this surprising conclusion come from? We have to go back to Hilbert. He said that when you set up a formal system of axioms, there should be a mechanical procedure for deciding whether a mathematical proof is correct or not, and the axioms should be consistent and complete. If the system of axioms is consistent, you cannot prove both a result and its contrary. If the system is complete, then any assertion can be proved either true or false. Together, these would give a mechanical procedure for deciding every mathematical assertion.

There is a colourful way to explain how this mechanical procedure would work: the so-called 'British Museum algorithm'. What you do - it cannot be done in practice because it would take forever - is to use the axiom system, set in the formal language of mathematics, to run through all possible proofs, in order of size and, within each size, in lexicographic order. You check which proofs are correct - which ones follow the rules and are accepted as valid. In principle, if the set of axioms is consistent and complete, you can decide whether any theorem is true or false. Such a procedure means that a mathematician no longer needs ingenuity or inspiration to prove theorems. Mathematics becomes mechanical.
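
As a minimal sketch (in Python, not from the original article), the British Museum ordering is just an enumeration of all finite strings, shortest first; the proof-checker checks_proof stands for the assumed mechanical procedure and is left abstract here.

    from itertools import count, product

    ALPHABET = "01"  # stand-in for the symbols of the formal language

    def candidate_proofs():
        # Yield every finite string over ALPHABET: shortest first,
        # and in lexicographic order within each length.
        for length in count(1):
            for symbols in product(ALPHABET, repeat=length):
                yield "".join(symbols)

    def decide(assertion, checks_proof):
        # checks_proof(text, claim) is the assumed mechanical checker.
        # If the axioms were consistent and complete, a valid proof of
        # the assertion or of its negation would eventually turn up,
        # so this search would always halt.
        for text in candidate_proofs():
            if checks_proof(text, assertion):
                return ("true", text)
            if checks_proof(text, "not " + assertion):
                return ("false", text)

If no proof of either the assertion or its negation exists, the search runs forever - which is exactly the loophole the incompleteness results exploit.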

Why mathematics is not complete
Of course, mathematics is not like that. Kurt Gödel, the Austrian logician, and Alan Turing, the father of the computer, showed that it is impossible to obtain both a consistent and complete axiomatic theory of mathematics and a mechanical procedure for deciding whether an arbitrary mathematical assertion is true or false, or is provable or not.

Gödel was the first to devise the ingenious proof, couched in number theory, of what is called the incompleteness theorem (see 'The incompleteness of arithmetic', New Scientist, 5 November 1987). But I think that the Turing version of the theorem is more fundamental and easier to understand. Turing used the language of the computer - the instructions, or program, that a computer needs to work out problems. He showed that there is no mechanical procedure for deciding whether an arbitrary program will ever finish its computation and halt.

To show that the so-called halting problem can never be solved, we set the program running on a Turing machine, which is a mathematical idealisation of a digital computer with no time limit. (The program must be self-contained, with all its data wrapped up inside it.) Then we simply ask: 'Will the program go on forever, or at some point will it say "I'm finished" and halt?'

Turing showed that there is no set of instructions that you can give the computer, no algorithm, that will decide if a program will ever halt. Gödel's incompleteness theorem follows immediately because if there is no mechanical procedure for deciding the halting problem, then there is no complete set of underlying axioms either. If there were, they would provide a mechanical procedure for running through all possible proofs to show whether programs halt - although it would take a long time, of course.
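
The nub of Turing's argument can be sketched in a few lines of Python (an illustration of the standard diagonal argument, not Turing's own notation). Suppose someone hands us halts, an algorithm claimed to solve the halting problem; we can then write a program that does the opposite of whatever halts predicts about it.

    def halts(program_source, program_input):
        # The claimed decider: return True exactly when the given
        # program, run on the given input, eventually halts.
        # Turing showed no such algorithm can exist.
        raise NotImplementedError

    def paradox(source):
        # Run paradox on its own source code: if halts() answers True,
        # paradox loops forever; if it answers False, paradox halts.
        # Either way the prediction is wrong, so no correct halts()
        # can be written.
        if halts(source, source):
            while True:
                pass
        return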

To obtain my result about randomness in mathematics, I simply take Turing's result and change the wording. What I get is a sort of mathematical pun. Although the halting problem is unsolvable, we can ask for the probability that a randomly chosen program will halt. We start with a thought experiment using a general-purpose computer that, given enough time, can do the work of any computer - the universal Turing machine.

Instead of asking whether or not a specific program halts, we look at the ensemble of all possible computer programs. We assign to each computer program a probability that it will be chosen. Each bit of information in the random program is chosen by tossing a coin, an independent toss for each bit, so that a program containing a certain number of bits, say N bits, will have a probability of 2^-N. We can now ask for the total probability that a program chosen in this way will halt. This halting probability, call it Ω (capital omega), wraps up Turing's halting problem in a single number between 0 and 1. If no program halted, Ω would be 0; if every program halted, Ω would be 1.
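
Although Ω itself cannot be computed, its partial sums can be watched growing. Here is a toy sketch in Python (mine, not from the article): the 'programs' are the prefix-free bit strings 1^k 0, so that the coin-toss probabilities 2^-(k+1) of all programs sum to at most 1, and the uncomputable halting test is replaced by a decidable stand-in.

    from fractions import Fraction

    def halts_within(k, max_steps):
        # Toy stand-in for running program number k: iterate the
        # Collatz map from k and call it a halt if 1 is reached
        # within max_steps. The real halting test is uncomputable,
        # so any decidable stand-in like this is only a toy.
        n = k
        for _ in range(max_steps):
            if n == 1:
                return True
            n = 3 * n + 1 if n % 2 else n // 2
        return False

    def omega_lower_bound(max_k, max_steps):
        # Program k is the bit string 1^k 0, of length k + 1, chosen
        # by coin tossing with probability 2^-(k+1). Summing the
        # probabilities of programs seen to halt gives an estimate
        # that can only grow towards the toy omega.
        total = Fraction(0)
        for k in range(1, max_k + 1):
            if halts_within(k, max_steps):
                total += Fraction(1, 2 ** (k + 1))
        return total

    print(float(omega_lower_bound(20, 1000)))

The catch, as the next paragraphs explain, is that for the real Ω we can never know how close such an estimate has come.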

In the same way that computers express numbers in binary notation, we can describe Ω in terms of a string of 1s and 0s. Can we determine whether the Nth bit in the string is a 0 or a 1? In other words, can we compute Ω? Not at all. In fact, I can show that the sequence of 0s and 1s is random using what is called algorithmic information theory. This theory ascribes a degree of order to a set of information or data according to whether there is an algorithm that will compress the data into a briefer form.

For example, a regular string of 1s and 0s describing some data, such as 0101010101..., which continues for 1000 digits, can be encapsulated in the shorter instruction 'repeat 01 500 times'. A completely random string of digits cannot be reduced to a shorter program at all. It is said to be algorithmically incompressible.
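
An ordinary file compressor gives a rough, practical feel for the distinction (a sketch of mine; a real compressor only approximates algorithmic compressibility, which is defined by the shortest program rather than by any particular compression scheme).

    import random
    import zlib

    regular = b"01" * 500                                     # 0101... for 1000 digits
    noise = bytes(random.choice(b"01") for _ in range(1000))  # 1000 coin-toss digits

    print(len(zlib.compress(regular)))  # a couple of dozen bytes: the pattern collapses
    print(len(zlib.compress(noise)))    # far larger: about 1000 bits of genuine information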

My analysis shows that the halting probability is algorithmically random. It cannot be compressed into a shorter program. To get N bits of the number out of a computer, you need to put in a program at least N bits long. Each of the N bits of Ω is an irreducible, independent mathematical fact, as random as tossing a coin. For example, in the limit there are as many 0s in Ω as 1s, and knowing all the even bits does not help us to know any of the odd bits.

My result that the halting probability is random corresponds to Turing's assertion that the halting problem is undecidable. It turned out to provide a good way to exhibit randomness in number theory, the bedrock of mathematics. The key was a dramatic development about five years ago. James Jones of the University of Calgary in Canada and Yuri Matijasevic of the Steklov Institute of Mathematics in Leningrad rediscovered a theorem proved by Édouard Lucas in France a century ago. The theorem provides a particularly natural way to translate a universal Turing machine into a universal diophantine equation - one equivalent to a general-purpose computer.

I thought it would be fun to write it down. So with the help of a large computer I wrote down a universal-Turing-machine equation. It had 17 000 variables and went on for 200 pages. The equation is of a type referred to as 'exponential diophantine'. All the variables and constants in it are non-negative integers: 0, 1, 2, 3, 4, 5, and so on. It is called 'exponential' because it contains numbers raised to an integer power. In normal diophantine equations the power has to be a constant; in this equation, the power can be a variable. So in addition to having X^3, it also contains X^Y.

To convert the assertion that the halting probability Ω is random into an assertion about the randomness of solutions in arithmetic, I need only make a few minor changes in this 200-page universal-Turing-machine diophantine equation. The result, my equation exhibiting randomness, is also 200 pages long. The equation has a single parameter, the variable N. For any particular value of this parameter, I ask the question: 'Does my equation have a finite or infinite number of whole-number solutions?' Answering this question turns out to be equivalent to calculating the halting probability. The answer 'encodes' in arithmetical language whether the Nth bit of Ω is a 0 or a 1. If the Nth bit of Ω is a 0, then my equation for that particular value of N has a finite number of solutions. If the Nth bit of the halting probability Ω is a 1, then this equation for that value of the parameter N has an infinite number of solutions. Just as the Nth bit of Ω is random - an independent, irreducible fact like tossing a coin - so is deciding whether the number of solutions of my equation is finite or infinite. We can never know.
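
In code, the encoding is nothing more than this correspondence (a hypothetical sketch: number_of_solutions is precisely the function that no algorithm can supply).

    def nth_bit_of_omega(n, number_of_solutions):
        # number_of_solutions(n) stands for the count of whole-number
        # solutions of the 200-page equation at parameter value n,
        # with None meaning infinitely many. No algorithm can supply
        # this function; it is written out only to display the encoding.
        return 1 if number_of_solutions(n) is None else 0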

To find out whether the number of solutions is finite or infinite in particular cases, say, for k values of the parameter N, we would have to postulate the k answers as k additional independent axioms. We would have to put k bits of information into our system of axioms, so we would be no further forward. This is another way of saying that the k bits of information are irreducible mathematical facts.

I have found an extreme form of randomness, of irreducibility, in pure mathematics - in a part of elementary number theory associated with the name of Diophantus, which goes back 2000 years to classical Greek mathematics. Hilbert believed that mathematical truth was black or white, that something was either true or false. I think that my work makes things look grey, and that mathematicians are joining the company of their theoretical physics colleagues. I do not think that this is necessarily bad. We have seen that in classical and quantum physics, randomness and unpredictability are fundamental. I believe that these concepts are also found at the very heart of pure mathematics.


The Author

Gregory Chaitin is a member of the theoretical physics group at the Thomas J. Watson Research Center in Yorktown Heights, New York, which is part of IBM's research division.

Further Reading

G. J. Chaitin, Information, Randomness & Incompleteness, second edition, World Scientific, Singapore, 1990.
G. J. Chaitin, Algorithmic Information Theory, third printing, Cambridge University Press, Cambridge, 1990.

New Scientist, 24 March 1990