Crashing the Barriers

Does it really matter if there are some things that science will never solve? Ian Stewart thinks not

"We must know. We shall know." So said David Hilbert, one of the leading mathematicians at the turn of the century. Hilbert was gung ho about the future of mathematics. No-go areas should not exist, he believed, and he even had the outlines of a program to prove it. Yet within a few years, Hilbert's dream lay in ruins -a young logician named Kurt Gödel had proved that some mathematical questions simply don't have answers.

What Gödel showed in 1930 was that any logical system rich enough to model mathematics will always have insoluble problems. For instance, it is impossible to prove, within mathematics itself, that mathematics contains no logical inconsistencies. Of course, you can deal with any particular insoluble problem by adding a new mathematical rule, but a new insoluble problem will always appear in the patched-up system.

Demise of science
Even at the time, it was a disturbing revelation. But today the implications could be downright shocking. A small but increasingly vocal group of scientists is beginning to wonder whether Gödel's mathematical limits might also exist in the real world. Could there be questions that science will never be able to answer, no matter how accurately you know the conditions, no matter how big your computer, no matter how clever you are? And if so, could running up against them herald the demise of science?

At first sight the connection between Gödel's arcane mathematical examples and the real world is not self-evident. But although experimental science is about reality, theoretical science is about ideas, and most of those ideas depend crucially on mathematical proof. So limits to mathematics might well translate into limits to scientific theories.

Take the example of a toy train: you can't predict the path of a toy train on some model railway layouts, because of another famous insoluble problem in mathematics, Alan Turing's Halting Problem, which he posed in 1936 at the birth of computability theory.

If you're using a computer for a task such as word processing, you would expect the machine to do what it was asked and then stop, ready for the next task. However, programs can get "hung up" in infinite loops, doing the same thing over and over again.

Turing, one of the fathers of modern computing, asked whether it was possible to predict whether a program would eventually terminate or run forever. He devised a model for the computing process, which he called a Turing machine, consisting of a central processing unit to do all the calculations, a program, and as much memory as you need. Turing proved that, within such a framework, no mathematical theory can predict in advance whether a given computation will ever stop.

He did this by assuming that a program that could do the job existed, and then proving that its existence leads to a logical inconsistency. His argument ran roughly as follows. Call the imaginary predictor program A. Set A up so that you feed a test program into it, and A stops precisely when it establishes that the test program never halts. Now feed A into itself. If A (as test program) never stops, then A (as predictor) should stop and tell you so; but if A (as test program) stops, then A (as predictor) should run forever.

But A can't both stop and run forever. The only way out is to conclude that you can't predict which programs will halt in the first place. Turing's argument was actually a little more complicated, but that's the basic idea.
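
The self-reference at the heart of the proof is easy to mimic in modern code. Here is a minimal Python sketch of the idea (the names halts and spoiler are illustrative inventions, not Turing's): if the hypothetical predictor existed, the spoiler program would have to both stop and run forever.

```python
# Sketch of Turing's diagonal argument. The function `halts` is the
# hypothetical predictor; Turing proved it cannot be implemented.

def halts(program, data):
    """Supposed oracle: True if program(data) eventually stops,
    False if it runs forever. No such function can exist."""
    raise NotImplementedError("no possible implementation")

def spoiler(program):
    """Does the opposite of whatever `halts` predicts about
    running `program` on itself."""
    if halts(program, program):
        while True:      # predicted to stop, so loop forever
            pass
    else:
        return           # predicted to loop, so stop at once

# Feeding the spoiler to itself yields the contradiction:
# spoiler(spoiler) stops  <=>  halts(spoiler, spoiler) is False
#                         <=>  spoiler(spoiler) never stops.
```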

Enter the train set. In 1994, Adam Chalcraft and Michael Greene, then undergraduates at the University of Cambridge, discovered an interpretation of Turing machines in terms of a toy train wandering around a track. The Turing-machine layout has a depot from which the train starts, representing the start of a program, and a station, which represents the end of the computation. Each memory cell of a Turing machine can be represented by a "circuit" or network of track and points, and the contents of each memory cell depend on the states of the points within it.

You make as many sub-layouts as you need to have enough memory for the calculation. The layout is programmed by setting the points to particular states. The train is set off and it wanders through the layout, switching points as it passes through them. If it gets to the station, it stops; the results of the computation can then be read off from the various states of the points.

Using this interpretation, Turing's theorem implies that no formal theory, when presented with a randomly chosen layout, can predict whether the train will eventually reach the station. If a test for "does the train halt?" existed, you could construct a layout that suffers from the same problem as the computer program A: the train reaches the station if, and only if, it doesn't. That's obviously nonsense, so no decision procedure for halting can exist.
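
In software terms, the best you can do with such a layout, or with any Turing machine, is simulate it and watch. Here is a minimal sketch under toy conventions of my own (the state names, the blank symbol 0 and the example rule table are all invented for illustration); the step budget is arbitrary, because no budget can certify that an over-running machine will never halt.

```python
# A minimal Turing machine simulator. With no general halting test,
# all we can do is run the machine and give up after a step budget.

def run_tm(rules, tape, state="start", max_steps=10_000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state).
    Returns the final tape if the machine reaches "halt" within
    max_steps, else None -- which proves nothing: it may halt later."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            return [cells[i] for i in sorted(cells)]
        symbol = cells.get(pos, 0)                  # blank cells read as 0
        cells[pos], move, state = rules[(state, symbol)]
        pos += 1 if move == "R" else -1
    return None

# Illustrative machine: flip 1s to 0s until a 0 is read, then halt.
rules = {
    ("start", 1): (0, "R", "start"),
    ("start", 0): (1, "R", "halt"),
}
print(run_tm(rules, [1, 1, 1, 0]))   # -> [0, 0, 0, 1]
```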

Admittedly, this is a somewhat banal example: being unable to predict the long-term motion of a toy train doesn't sound like a particularly serious limitation. But it demonstrates that limits to theoretical science do exist. What, then, about practical science?

John Barrow, an astronomer at the University of Sussex, talks about several kinds of fundamental limits to practical science. One involves technological limits that are inherently intractable. For instance, a few years ago, theoretical physicist Rolf Landauer of IBM's research centre at Yorktown Heights in New York asked whether answering certain questions requires more resources - in space, time or energy - than are available in the entire Universe. Consider, for example, the state of the Universe as a whole. In a classical (non-quantum) model, it is possible in theory to describe the evolution of the Universe by specifying its initial conditions: the precise state of every particle an instant after the big bang, say. The equations of physics will then allow you to deduce all future states.

However, to specify the initial conditions you have to record a list of numbers for every constituent particle. Are there enough particles available to do this? It's a moot point. Even more problematic is what would happen if, in the mere act of recording those initial conditions, you disturbed the motion of a large proportion of the particles in the Universe: arguably, the future you predicted would be altered by how you set up the computation. It seems likely that trying to predict the behaviour of a system as complex as the Universe, while carrying out the prediction inside that system, is a self-defeating task.

Frontiers of knowledge
But is this a serious restriction on scientific endeavour? You could argue that determining the complete state of the Universe is the biggest, most ambitious scientific question you could ever ask. Yes, on the face of it, this does seem to provide a fundamental limit to our knowledge about the Universe. But you could see it more as a boundary: in a sense, it defines the edges of what we can know, while leaving plenty of space for scientific inquiry within the vast region defined by those edges.

Other examples of scientific limits involve genuine no-go areas: what you want to do sounds reasonable, and you can imagine doing it - it just happens not to be possible. Travelling faster than light, time travel (perhaps), or visiting a black hole and getting out again in one piece are just not on.

These "genuine" limits have to be carefully distinguished from what quantum physicist James Hartle calls "false limits". It is impossible to take a holiday in Atlantis: is this a genuine limitation of air travel? Hardly. In quantum mechanics, the Heisenberg uncertainty principle implies that it is impossible to measure both the position and momentum of a particle at the same time. This may look like a genuine no-go area, but it would be fairer to interpret the uncertainty principle as saying that a quantum particle does not possess a simultaneous position and momentum. It's no-go because, like Atlantis, it's not there.

Another way of thinking about this is Stephen Hawking's famous analogy of going north of the North Pole while staying on the surface of the Earth. Once again, it's no-go, but not because some physical limit stops you getting there: it's simply a meaningless thing to try to do. This is not a limit to scientific endeavour so much as a case of adopting the wrong mindset and asking the wrong question. You're asking about something that doesn't exist.

There are other investigations that may be hampered by this kind of "existential limit". Take the current hunt for a Theory of Everything that would reveal the four known forces of nature (gravitational, electromagnetic, strong and weak) as aspects of a single unified force. Though such a theory could well exist, we can't assume that it does. The real world might not be like that. If so, physicists will never find a way to link the four forces, no matter how hard they look. In their desire to pin down the nature of matter, they could be looking for the wrong thing.

But once again, this is not a restriction on science. Rather, it's an indication that you need to think about the problem in a different way. In fact, coming up against this kind of limit can be useful, in that it can help you to realise that you are asking the wrong question, and to work out the right one. Take protein folding. A protein is a large molecule (between, say, a thousand and a million atoms) composed of units known as amino acids. Most proteins are there to manipulate other molecules in a very specific way - for example, haemoglobin captures or releases molecules of oxygen. But the action of a protein depends very heavily on its exact shape - how the amino acid chain folds up in three dimensions.

Getting a protein to fold up is no great feat, any more than getting a piece of string to tangle. But a given chain of amino acids can, in principle, fold up in a vast number of different ways, and the problem is getting it to fold correctly.

Organic origami
For example, in humans, the protein cytochrome c has a chain of 104 amino acids, which is pretty short by protein standards; even so, the folded structure is distinctly complicated - and unless "biology" gets it exactly right, the organism won't work properly.

A protein containing a thousand amino acids can fold itself in about a second. Many physicists trying to model the process have worked on the assumption that biology does this by working out the configuration with the least energy. Unfortunately, it turns out to be incredibly difficult to compute minimal-energy configurations, even for short molecules. One estimate quoted in the recent book Boundaries and Barriers, edited by John Casti of the Santa Fe Institute, is that for cytochrome c such a calculation would take 10^127 years on a supercomputer.

Unlike Gödel-type limits, this one is not a limitation in principle; it is a limitation in practice. The difficulty here is that the number of potential configurations is vast, and the minimal-energy configuration lurks among them like a microscopic needle inside a haystack the size of a billion Universes. So how does this protein - or biology - perform, in a second, a 10^127-year computation? Massive parallelism? That might get it down to 10^100 years. Quantum superpositions of all possible folding patterns, automatically generating the minimal one? Unlikely.
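
It's easy to see where such astronomical figures come from. A back-of-envelope sketch in Python (the three-conformations-per-link count and the checking rate are illustrative assumptions, not the book's actual model) already produces a search space no computer could ever enumerate:

```python
# Back-of-envelope combinatorics for cytochrome c, assuming
# (illustratively) just 3 local conformations per link in the chain.

n_amino_acids = 104
conformations = 3 ** (n_amino_acids - 1)   # ~1.4e49 possibilities

# Even checking 10^15 conformations per second, exhaustive search
# takes absurdly long:
seconds = conformations / 1e15
years = seconds / 3.15e7                   # ~3.15e7 seconds per year
print(f"{conformations:.1e} conformations, ~{years:.1e} years")
# -> 1.4e+49 conformations, ~4.4e+26 years. Richer physical models,
#    with more states per link, push this towards figures like 10^127.
```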

What's probably happening is something that biologists have suspected for years: biology has found a quick-and-dirty method that comes close to minimal energy - close enough to fool literal-minded scientists into thinking that minimisation is really what's going on. The trick may be not to start with a complete linear chain of amino acids and then fold it up, which is what these horrendous computations try to do. Instead, biology may fold the chain as it builds it, sequentially, and that must surely reduce the computational complexity. It also, one imagines, jiggles the part-formed protein around every so often to prevent odd protuberances getting hooked up on extraneous loops.

In fact, George Rose of Johns Hopkins University in Maryland has written a new program called LINUS that employs heuristic rules (inspired scientific guesswork) to predict how really large proteins, with chains of 1000 amino acids, will fold. It's a bit like playing chess by using general principles like "don't lose your queen". You play a reasonable game, but not always the best possible one - a grandmaster might well win by breaking the heuristic rules, say with a queen sacrifice.

LINUS works in a similar way. Instead of looking for minimal energy, it works on principles such as "avoid shapes with energies that look too big", and it does pretty well. Joseph Traub of Columbia University in New York has studied the relation between protein folding as simulated on computers and protein folding as it really happens; his conclusion is that the limitations of one need not carry over to the other.
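
To see why heuristics beat exhaustive search so dramatically, consider this toy contrast (the "chain of turns" representation and its energy function are invented for illustration, and have nothing to do with LINUS's real rules): an exhaustive minimiser must examine 2^n candidate shapes, while a build-and-fold heuristic makes one cheap choice per unit.

```python
# Toy contrast between exhaustive energy minimisation and a greedy
# "fold as you build" heuristic, on a made-up chain of +/-1 "turns".

from itertools import product

def energy(turns):
    """Arbitrary toy energy: penalise long runs of identical turns."""
    e, run = 0.0, 1
    for a, b in zip(turns, turns[1:]):
        run = run + 1 if a == b else 1
        e += 0.1 * run * run
    return e

n = 16

# Exhaustive search: 2^n candidate shapes -- doubles per extra unit.
best = min(product((-1, 1), repeat=n), key=energy)

# Greedy heuristic: fix one turn at a time, keeping whichever choice
# is cheaper so far. Linear cost, but no guarantee of optimality.
chain = []
for _ in range(n):
    chain.append(min((-1, 1), key=lambda t: energy(chain + [t])))

print("exhaustive minimum:", energy(best))   # 2**16 energy evaluations
print("greedy result:     ", energy(chain))  # 2*16 energy evaluations
```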

Take it to the limit
Even if this is all wrong - which wouldn't be at all surprising - it shows how running up against a limit can be useful in science. Knowing that it's virtually impossible for biology to calculate the minimum-energy configuration in the short time available tells you that something else must be going on. Either living cells have some mysterious capacity for superfast calculations, or they are folding their proteins some other way. If it really is some other way, this in turn tells you that even if you performed your 10^100-year calculation, you could end up with the right answer for the lowest-energy configuration, but the wrong answer for the protein shape that the cell actually produces - which is what you're really after.

In a way, this illustrates why we should not be afraid of scientific limits. One of the strangest consequences of Gödel's theorem is that it had very little effect on the practice, or the growth, of mathematics. The main reason is that there are plenty of problems left that aren't insoluble. And anyway, extending Gödel's methods shows that there is no way to decide in advance whether your particular problem has a solution or not. So his theorem doesn't affect what you do: it just opens your eyes to the possibility that you might never succeed.

There is more to it than that, of course. Knowing one's limits is the essence of wisdom. Gödel's dramatic discovery spelt not the end of mathematics, but its maturity, and the same goes for science. Limits are unlikely to kill it off. Instead, they define the boundaries of what we can study, and can help our understanding within those boundaries. Barrow also sees limitations as a positive feature: "As we probe deeper into the intertwined logical structures that underwrite the nature of reality, we can expect to find more of these deep results which limit what can be known. Ultimately, we may even find that their totality characterises the Universe more precisely than the catalogue of those things that we can know." The biggest limitation of science may turn out to be an inability to determine its own limitations. After all, if science really were omnipotent, it would be able to invent a theory so hard that scientific method couldn't come to grips with it.


Further reading
Boundaries and Barriers, a collection of papers on limits to scientific knowledge, edited by John Casti and Anders Karlqvist (Addison-Wesley, 1996)

New Scientist, 29 March 1997