 ## An Experiment with Mathematics

Think of a number x, put it into a simple equation and feed the equation to a computer. Put the answer back into the equation. Repeat the exercise and watch chaos evolve before your eyes.

Franco Vivaldi

If something is too large, make it smaller; if it is too small, make it larger. This is not a quotation from Mao's Little Red Book, but rather a simple recipe for constructing a dynamical process called feedback. It is a constant presence in our lives, and in many practical situations it is important to predict how large or small a variable quantity associated with feedback will eventually become. Common sense would suggest that it will settle down somewhere in the middle, where it is neither too large nor too small. But this answer looks suspiciously simple, and in fact it can be terribly wrong. I want to show here how feedback may turn into chaos.

The classic example comes from population dynamics, where feedback prevents populations of plants or animals from growing indefinitely. For instance, imagine that our feedback variable is the number of fishes in an ideal lake, free from pollution and fishermen. If there are few fishes, they will thrive in the favourable environment and reproduce rapidly, and the fish population will increase. But if there are too many, they will compete for food and suffer from their own pollution (an ideal lake, remember), and their number will decrease.

We can formulate this problem in an abstract setting using simple mathematics. Let us call the variable xt, where the subscript t, a whole number, stands for the time, with the stipulation that successive times correspond to successive measurements of the variable x. Thus t does not necessarily represent the physical time; it is rather a convenient label that orders a succession of events. In the population problem, t would label every new generation.

Assuming that t = 0 corresponds to the present, that is, that x0 is the value measured at the beginning of the observations, then predicting the future means computing xt. The larger the value of t, the more remote the future that we are probing. We can even let t approach infinity, a privilege denied in real forecasting.

Dynamics comes into play when we specify a rule for transforming x, which sets the picture in motion. I assume that the result of the measurement at time t + 1 is unambiguously determined by that at time t. We can write this relation in a mathematical form:

xt+1 = f(xt)

The letter f denotes a "function", which is the way mathematicians indicate the precise relation between two quantities - xt+1 depends on xt, and only on xt.
The detailed structure of the function f is of no concern at the moment, but assuming that the link between the current and the successive values of x is described by a function is no small matter. In one stroke, I have removed any ambiguity in the determination of xt; chance, unknown external factors and noise are not allowed to play any role here: this process is deterministic. In other words, the future of deterministic systems follows from the present, without uncertainties. "It is not like playing roulette", it would be tempting to say. But do not say it, because we will be gambling later on, using a simple deterministic feedback system.
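A deterministic rule of this kind is easy to mimic on a machine. Here is a minimal sketch in Python (the article's later program is in Basic; this helper, `iterate`, is an illustration of mine, not part of the original text): it applies a rule f over and over and records every measurement.

```python
def iterate(f, x0, steps):
    """Apply the deterministic rule x_{t+1} = f(x_t) over and over,
    returning the whole sequence of measurements x_0, x_1, ..., x_steps."""
    x = x0
    history = [x]
    for _ in range(steps):
        x = f(x)
        history.append(x)
    return history

# A toy deterministic rule: halve the value at every step.
print(iterate(lambda x: x / 2, 1.0, 4))
```

Given the rule and x0, nothing is left to chance: running this twice produces exactly the same sequence.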

All information about the system is stored within the function f. Consider now a specific process, given by the following function f:

xt+1 = λxt(1 - xt)    (Equation 1)

which, if you remember from school mathematics, represents a parabola. The function f depends on a certain parameter λ, which we have introduced so as to incorporate in a single description the behaviour of a whole family of feedback systems. This parameter quantifies the strength of the feedback, that is, the amount by which the feedback is correcting the value of the variable x. Geometrically, by varying λ, we vary the shape of f.

Figure 1: A graph of Equation 1 for different values of λ

It is useful to see what f looks like for various values of λ. To do this, we can plot xt+1 against xt on a plane (see Figure 1). All points lying above the diagonal given by xt+1 = xt have the property that xt+1 is greater than xt, while for those below it we have xt+1 less than xt. The graph of f will, therefore, lie above the diagonal for small x and below it for large x. Then it must necessarily cross that line, at least once. At the crossing point we have xt+1 = xt, that is, xt retains the same value in two successive measurements.
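The crossing point can be found with school algebra: setting λx(1 - x) = x gives x = 0 or x = 1 - 1/λ. A few lines of Python (a sketch of mine, not from the article) confirm that at this point two successive measurements agree:

```python
def f(x, lam):
    """Equation 1: the parabola x_{t+1} = lam * x_t * (1 - x_t)."""
    return lam * x * (1 - x)

lam = 2.0
x_star = 1 - 1 / lam          # nonzero solution of lam * x * (1 - x) = x
print(x_star, f(x_star, lam)) # the two values coincide: x stays put
```

For λ = 2 the crossing point is x = 0.5, a value we shall meet again in the experiments below.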

Equation 1 is a famous model, called the logistic map, originally proposed in the context of population dynamics. The variable x is meant to be restricted between 0 and 1, only apparently a limitation, because there is no harm in assuming that x has been suitably scaled so as to lie between these limits. Does this simple-looking problem have an exact solution? Can we derive a formula expressing xt explicitly as a function of x0 and λ? This would be the formula for predicting the future. The answer is yes. There is a straightforward, if naive, way of obtaining such a solution, as I shall indicate, but the result will prove to be disappointing. If you find formulas hostile and incomprehensible, bear with me for the next two paragraphs, and your worst fears will be confirmed.

I begin by letting t = 0 in Equation 1. In this way, I obtain the value of x1 as a function of x0 and λ, that is:

x1 = λx0(1 - x0)

Now I can take Equation 1 again, but with t = 1. This gives me x2 = λx1(1 - x1), and I can substitute the expression for x1, already obtained, to get the equation:

x2 = λ(λx0(1 - x0))(1 - λx0(1 - x0))

One more time. From x3 = λx2(1 - x2), and using the expression for x2 just found, I obtain x3 as a function of x0 and λ, that is, the future three steps ahead, as a function of the present, at the chosen value of the parameter:

x3 = λ[λ(λx0(1 - x0))(1 - λx0(1 - x0))][1 - λ(λx0(1 - x0))(1 - λx0(1 - x0))]

You could scarcely get excited about this accomplishment. These formulas rapidly become long and inscrutable; the expression for x3 is already unfriendly, I could not fit it on a single line, x15 will fill a book and x30 the British Library. This is because the formula for xt physically contains the formula for xt-1, which in turn contains that for xt-2, and so on. The formal solution to our problem is clearly useless for predicting anything but the very near future, and should make you ponder over the meaning of the word "solution", an attribute these formulas do not deserve. We shall see that these equations cannot be simplified, plainly because what they describe is not simple. There is chaos in the system; in this sense, the problem has no solution.
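You can watch the blow-up without doing any algebra. The following Python sketch (mine, not from the article) performs the substitution textually, leaving λ and x0 symbolic, and records only the length of the expression for each xt:

```python
# Build the formula for x_t by textual substitution, to watch it grow.
# "lam" and "x0" stay symbolic; only the expression's size interests us.
expr = "x0"
lengths = []
for t in range(1, 8):
    expr = "lam*({e})*(1 - ({e}))".format(e=expr)
    lengths.append(len(expr))
print(lengths)   # each entry is a little more than double the previous one
```

Each substitution a little more than doubles the length, so the size of the formula for xt grows like 2 to the power t: exponential growth, which is exactly why x15 fills a book.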

What is actually happening to xt as t increases? We ought to perform an experiment. After all, this is what people do in the physical sciences when theory fails. Rather than searching for a result valid for all x0 and λ (the sequence of functions above), I shall choose specific values of x0 and λ, from them compute x1 = λx0(1 - x0), then x2 = λx1(1 - x1), and so forth, up to xt. Manipulating numbers instead of functions makes this process much more economical. At each step, all we have to retain of the past is the previous value of x, which is one number, rather than the entire past history of all possible processes.

The outcome of this mathematical experiment will be a sequence of measurements x0, x1, x2, ..., xt, just like in real experiments. We will not have a "formula" to pride ourselves on, but we will gain precious information by means of iterative calculations.

Computers love iteration - repeating the same task over and over again. In our case, each individual task, the computation of f(x), is actually very simple, two multiplications and one subtraction in all. With each arithmetical operation taking a tiny fraction of a second, the prospect of computing x1000000 becomes real. It can be done overnight on your personal computer, and faster than a blink on a Cray supercomputer. Moreover, in numerical experiments, the precision with which the ingredients and the results can be measured is limited only by the size and power of the computer.

To unveil the presence of chaos, I will choose a specific numerical experiment. Let x0 = 0.4, and compute xt for t = 1, ..., 15, and for a few values of λ. Those who are familiar with a programming language will find that the bulk of the program consists of a simple iterative loop, specified by a sequence of statements such as (here in Basic):

 FOR I=1 TO 15
   X=LAMB*X*(1 - X)
   PRINT X
 NEXT

Not a very intimidating program. The results may look like Figure 2.

For λ = 2, the experiment brings comforting news. We witness the predicted relaxation of x, growing rapidly to a value that is neither too small, nor too large, x = 0.5. This is the value at which f(0.5) = 0.5 when λ = 2 (see Figure 1); that is why once x reaches 0.5, it remains there. The outcome of the second experiment is more puzzling. The value λ = 1 + √5 = 3.236 . . . was not a random choice.

The sequence of numbers appears to approach a final regime where two distinct values of x are alternating. Had we thought about the feedback process more carefully, we could have predicted this behaviour. At this value of the parameter the feedback is strong enough to produce an overcorrection: a value of x that is too small is followed by one that is too large, and vice versa. This causes x to relax to a configuration where the opposite overcorrections balance each other precisely, and we get regular oscillations between x = 0.50000 and x = 0.80902.
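The oscillation is easy to check by hand or by machine: starting from 0.5, one application of Equation 1 gives (1 + √5)/4 = 0.80902..., and a second application returns exactly 0.5. A short Python sketch (mine, in place of the article's Basic):

```python
import math

lam = 1 + math.sqrt(5)          # the parameter value chosen in the article
def f(x):
    return lam * x * (1 - x)

a = 0.5
b = f(a)
print(round(b, 5))              # 0.80902
print(round(f(b), 5))           # 0.5 -- the orbit returns: period 2
```

A little algebra confirms this is exact, not a numerical accident: with λ = 1 + √5, f(f(0.5)) = λ²/4 - λ³/16 = 1/2.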

 t    λ = 2     λ = 1 + √5   λ = 4
 0    0.40000   0.40000      0.40000
 1    0.48000   0.77666      0.96000
 2    0.49920   0.56133      0.15360
 3    0.50000   0.79684      0.52003
 4    0.50000   0.52387      0.99840
 5    0.50000   0.80717      0.00641
 6    0.50000   0.50368      0.02547
 7    0.50000   0.80897      0.09928
 8    0.50000   0.50009      0.35768
 9    0.50000   0.80902      0.91898
 10   0.50000   0.50000      0.29782
 11   0.50000   0.80902      0.83650
 12   0.50000   0.50000      0.54707
 13   0.50000   0.80902      0.99114
 14   0.50000   0.50000      0.03514
 15   0.50000   0.80902      0.13561

Figure 2: The results of computing xt in Equation 1, starting from x0 = 0.4, for three different values of λ.

 t    λ = 2     λ = 1 + √5   λ = 4
 0    0.35000   0.35000      0.40001
 1    0.45500   0.73621      0.96001
 2    0.49595   0.62847      0.15357
 3    0.49997   0.75561      0.51995
 4    0.50000   0.59758      0.99841
 5    0.50000   0.77820      0.00636
 6    0.50000   0.55856      0.02526
 7    0.50000   0.79792      0.09850
 8    0.50000   0.52180      0.35518
 9    0.50000   0.80748      0.91610
 10   0.50000   0.50307      0.30743
 11   0.50000   0.80899      0.85167
 12   0.50000   0.50006      0.50531
 13   0.50000   0.80902      0.99989
 14   0.50000   0.50000      0.00045
 15   0.50000   0.80902      0.00180

Figure 3: The results of computing xt starting from x0 = 0.35000 (and, for λ = 4, from x0 = 0.40001). The numbers differing from those in Figure 2 were shown in bold.

The surprise comes from the sequence in the column on the far right, which you could have hardly guessed. It does not show any obvious pattern, and you might think that there was a mistake in programming, but there is no error. Successive values of x appear to be unrelated, but they are related, and by just two multiplications and one subtraction; you can check. This lack of a discernible structure is not a peculiarity of the first 15 data points; it will persist as long as your computer can compute. This is chaos.

[Picture caption: Not just a pretty pattern; this beautiful and unusual computer-generated picture shows what happens to Equation 1 when λ is allowed to alternate between two numbers A and B. The colours in the picture show the nature of the motion, ranging from the orderly (the dark area) to completely chaotic (lighter areas). A is the x-axis and B is the y-axis.]

It is now clear that we are in possession of a remarkable model, whose simplicity contrasts with the variety of behaviour that it can produce. So far, only the feedback strength l has been changed, but there is more than one reason for changing the initial state as well. There was nothing special about x0= 0.4, and we should try other values. It is perhaps even more illuminating to vary the initial state by small amounts, in order to simulate small uncertainties or errors in the measurement of the initial datum, and assess their impact on the future evolution of the system.

The second experiment is a replica of the first, but with different values of x0. This time we obtain the data in Figure 3. For the first two values of λ, the initial state was changed by 5 per cent. This would be a realistic, if somewhat large, uncertainty on the initial data. The discrepancy in the early values of xt fades away rapidly (more so in the first experiment), as the time evolution brings the system to the same final state. I invite you to take the time to try other values of x0 in the unit interval, to convince yourself that the resulting sequences are invariably attracted to the same final regimes: a single point in the first experiment, a pair of them in the second. No wonder these sets have been given the pictorial name of "attractors".
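If you would rather let the machine do the trying, the following Python sketch (a crude helper of my own, `attractor_period`, not from the article) discards the early, transient part of the orbit and then looks for the smallest number of steps after which x repeats, which is the period of the attractor:

```python
def attractor_period(lam, x0=0.4, max_period=16, tol=1e-6):
    """Iterate Equation 1 past the transient, then return the smallest T
    for which the orbit repeats after T steps; None if no short period."""
    x = x0
    for _ in range(1000):              # let the transient die out
        x = lam * x * (1 - x)
    orbit = [x]
    for _ in range(2 * max_period):
        x = lam * x * (1 - x)
        orbit.append(x)
    for T in range(1, max_period + 1):
        if all(abs(orbit[i] - orbit[i + T]) < tol for i in range(max_period)):
            return T
    return None                        # no short period found: possibly chaos

for lam in (2.0, 1 + 5 ** 0.5, 3.5, 4.0):
    print(lam, attractor_period(lam))
```

The three columns of the tables reappear: λ = 2 gives a one-point attractor, λ = 1 + √5 a pair, while for λ = 4 no short period is found at all.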

For λ = 4, the initial state was changed by only one part in 100 000, and we are entitled to expect a virtually identical replica of the previous experiment. But this time the error increases, and at a remarkable rate (shown in bold in Figure 3). By the 15th measurement, it has contaminated all available digits, making our predictive power null. This is the signature of chaos. Had we doubled the number of digits of accuracy, in other words made the accuracy 100 000 times as great, we would just have postponed the problem for twice as long. There is no way out; at some point the results of the experiment are going to become meaningless.
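The growth of the error is easy to reproduce. This Python sketch (mine, mirroring the article's experiment) runs the two trajectories of Figure 2 and Figure 3 side by side and prints the gap after 15 steps:

```python
lam = 4.0
def f(x):
    return lam * x * (1 - x)

x, y = 0.4, 0.40001     # two initial states, one part in 100 000 apart
for t in range(15):
    x, y = f(x), f(y)
print(x, y, abs(x - y)) # by t = 15 the trajectories bear no resemblance
```

Increase the number of iterations and the two sequences stay completely uncorrelated; shrink the initial difference and you merely delay the divergence.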

We can make use of this property, and transform this process into an honest mathematical toss of a coin, where a value of x smaller or greater than 0.5 will mean "head" or "tail", respectively. Pick the value of x0 of your choice, and then place your bet on x20.
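The coin fits in a few lines of Python (the function name `toss` and the rounding of the bet are my own illustrative choices):

```python
lam = 4.0

def toss(x0, steps=20):
    """A deterministic 'coin': iterate Equation 1, then read the bet off x."""
    x = x0
    for _ in range(steps):
        x = lam * x * (1 - x)
    return "head" if x < 0.5 else "tail"

print(toss(0.41), toss(0.42), toss(0.43))  # try predicting these by hand
```

The rule is perfectly deterministic, yet without running all 20 steps at full precision you have no practical way of predicting the outcome.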
We have arrived at the core of the issue: the realisation that there are systems, even within mathematics, that are both deterministic and unpredictable. We cannot blame this failure on the influence of unknown factors, because there are none. It is rather the result of our own fundamental inability to measure or represent the present with infinite precision.

People have suspected for a long time that chaos exists in dynamical systems, but it took computers to demonstrate it and assess the implications. The history of the logistic map provides a marvellous example of the interplay between theory and experiment within mathematics, something that was once a prerogative of the physical sciences.

Period doubling and Feigenbaum numbers

The first two columns of Figures 2 and 3 in the main text are examples of what are called periodic orbits. Chaologists like to use geometrical language in describing changing values, so saying that an orbit is periodic just means that x eventually cycles back to its original value. In the case of the first column, once x settles down to a constant value, 0.50000, it returns to the same value after each successive step down the column. So the first column is said to have a period of 1. In general, we can say that if x returns exactly to its original value after T steps, the orbit has a period T and has only T distinct points. For the second column, then, the orbit settles down to period 2 as it alternates between 0.50000 and 0.80902.

For any positive whole number T there are values of λ with orbits of period T, but their arrangement is extraordinarily complicated, as you will find out if you experiment with a computer. Among all these complications, there is one pattern that recurs over and over again: the pattern of "period doubling". The Figure below shows the periodic orbits for all values of λ between 2.5 and 3.5700. As λ increases, the period goes from 1 to 2 to 4 and so on, through all the powers of 2. The greater the period, the faster the period doubling becomes, and the smaller the distance between neighbouring points on the orbits. For period 2048, you would need a microscope to see the structure.

The higher periods have another remarkable property, which was analysed by Mitchell Feigenbaum in the 1970s when he was at Los Alamos in the US. That is the property of "renormalisation". When the periods are sufficiently high, the magnified fine structure for one orbit, period 2048 for example, is indistinguishable from the structure for the previous period, in this case period 1024, provided that you carry out the magnification to a precise specification: the magnification in λ should be 4.66920166... and the magnification in x should be 2.502908... Feigenbaum found that the same numbers and the same structure appear for all sufficiently smooth functions f(x) whose graph has only one maximum. So these numbers, like π, are universal. Period doubling and Feigenbaum numbers appear not only on the mathematician's computer screen but also in many kinds of natural chaos, including the dripping tap and the beating heart.

Ian Percival

Computers as a tool for mathematics
Early theoretical results had already explained some qualitative aspects of the changing behaviour of the system as the parameter is varied (without using my awkward formulas, though). They encouraged mathematicians to run extensive computations on their computers. These unveiled quantitative phenomena crucial for the understanding of chaos. A new mathematical theory originated from the computer experiments, and this process of symbiosis culminated in a computer-assisted proof of the main predictions regarding the transition between order and chaos.

Common folklore relegates mathematics to the theoretical sciences, where information and knowledge are reached by logical steps within an abstract framework, and not from experiments. In fact, mathematical discoveries are more likely to spring from the patiently acquired experience of many specific computations. The milestones of abstract thinking that have characterised the mathematics of our century have left little room for public display of the usefulness, and the charm, of mathematical experimentation.

Yet all great mathematicians of the past were eager computers, and felt little obligation to hide their calculations behind a polished façade of abstraction. Carl Friedrich Gauss, one of the greatest mathematicians, once refrained from disclosing a numerical table so as not to deprive the reader of the pleasure of computing it. The tendency towards experimentation has been particularly notable in number theory, where so many famous theorems have been inferred from numerical data, and even used before proofs were available.

Computers have added a new dimension to the experimental side of mathematics, and have made mathematical experimentation as fruitful and tangible as that of the physical world. This is particularly true in dynamics, because of the natural role of computers in iterative processes. Simple rules, like the logistic map, can disclose unexpected treasures when applied over and over again, but often the results materialise only after millions of operations.

The marriage between chaos and computers has even more profound roots. There is an intimate relationship between chaotic dynamics and the structure of the number system itself, which iteration helps bring to the surface, and which gives the science of chaos an appeal to fundamentals that few other sciences have. Extreme sensitivity to initial conditions characterises a chaotic system, and our mathematical experiment brings the finest details of the numbers representing the initial state x0 into centre stage. In the final analysis, the key to understanding the future is buried within the arithmetical properties of those numbers.
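For λ = 4 this relationship can be made completely explicit. It is a classical fact (not derived in the article) that Equation 1 then has the exact solution xt = sin²(2^t · arcsin(√x0)): each iteration doubles an underlying angle, which amounts to shifting its binary digits one place to the left, so after t steps the t-th digit of the initial state has risen to the surface. A short Python check of the formula against direct iteration:

```python
import math

def closed_form(x0, t):
    # Classical exact solution of Equation 1 for lam = 4:
    # x_t = sin^2(2^t * arcsin(sqrt(x0))).
    theta = math.asin(math.sqrt(x0))
    return math.sin((2 ** t) * theta) ** 2

x = 0.4
for t in range(1, 6):
    x = 4 * x * (1 - x)                    # direct iteration of Equation 1
    print(t, round(x, 5), round(closed_form(0.4, t), 5))
```

The two columns agree, and the doubling of the angle shows precisely why one part in 100 000 in x0 becomes a completely different orbit within a few dozen steps.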

I cannot help wondering what mathematics would be like if the human brain could perform 10^12 arithmetical operations per second. It would certainly be very different, and so would be our mathematical description of the physical world. Machines of that speed are now being conceived. Mathematics will change.

The Author

Franco Vivaldi is in the School of Mathematical Sciences at Queen Mary and Westfield College, London.

Further Reading

R. H. Abraham and C. D. Shaw, Dynamics: The Geometry of Behavior, Volumes 1-4, Aerial Press, Santa Cruz, California.


New Scientist, 28 October 1989