A PORTRAIT OF CHANGE – PART ONE

The essence of mathematical chaos – that it is devilishly difficult to make long-term predictions about certain systems – usually takes the form of a colourful metaphor: the flap of a butterfly’s wings in Brazil could set off a tornado in Texas. The Butterfly Effect, as this idea has come to be called, has behind it the reasoning that the flap of a butterfly’s wings could cause minute changes in air currents which would blow up, given time, and set off a chain of other atmospheric events that would ultimately result in a tornado.

The subtext is that there is no way to predict this tornado because there is no way to account for all the minute variations in air currents that could lead to its formation – a butterfly’s wings, heck, even a rat’s fart could cause some tiny local change in the atmospheric conditions. Mathematicians have a more exact and considerably less poetic name for the phenomenon – Sensitive Dependence on Initial Conditions – but we’ll come to that in time. Let’s stay with the metaphor for now.

The Butterfly Effect had its origins in a typo by Edward Lorenz, a mathematician-cum-meteorologist who spent a lot of his time making simplified mathematical models of the weather and simulating them on an ancient computer. His model would accept data about the usual atmospheric variables – pressure, temperature, humidity and so on – perform a series of calculations on them (to be more exact, it would solve a system of differential equations with the input data as initial conditions), and print a graph of how these variables evolved over time.

One day, he felt the need to run the simulation for a particular set of initial conditions again – only this time, he took a shortcut to save time. Instead of entering the initial conditions as he had originally entered them for the previous run, he looked at the printout of the previous run, entered the values of the variables at… say t = 120000 seconds as initial conditions for the present run, and went out for a cup of coffee.

His reasoning was sound enough – suppose you dropped a ball from the 30th floor balcony of a building, and found that when it passed the 20th floor its downward velocity had become some v m/s and that it hit the ground at some point x. Now, if you were to go up to the 20th floor balcony and release the ball again, this time giving it an initial downward velocity of v m/s (the same velocity it had at this point when dropped from ten floors up), you don’t expect it to land at some other point 10 metres away from x.
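To make the determinism explicit: under gravity alone (ignoring air resistance – a simplifying assumption, as it is in the thought experiment itself), the ball’s height at time t is pinned down entirely by its initial height s_0 and initial downward velocity v_0:

s(t) = s_0 - v_0 t - (1/2) g t^2

Start the clock at the 20th floor with the recorded velocity v, and you get the same curve, and hence the same landing point, as before.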

And yet, when Lorenz came back from his break an hour later, he found that something just as counter-intuitive had happened in his simulation. The results, which should have been the same or at least similar to those of the previous run, were drastically different. He realized what had happened some time later: the computer had space in its memory for six decimal places; the printout, however, had numbers rounded off to three decimal places. While entering the initial conditions for the simulation, he had entered .506 from the printout when he should have entered something like .506127 – an error of 0.000127 in the initial conditions had produced two completely different weathers, as depicted in Figure 1.
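If you want to see the round-off divergence for yourself: Lorenz’s actual program tracked twelve variables, but the three-equation system he published in 1963 has since become the standard stand-in for it. Below is a minimal sketch of the same experiment using that system – the integrator, the step size, and the .506127-style starting values are illustrative choices of mine, not Lorenz’s:

```python
# A sketch of Lorenz's accidental experiment, run on his later
# three-variable system (the famous Lorenz equations) rather than
# the twelve-variable weather model he was actually using.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """Advance the state by one fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def run(initial, steps=2500, dt=0.01):
    """Integrate and record the whole orbit."""
    state, states = initial, [initial]
    for _ in range(steps):
        state = rk4_step(lorenz, state, dt)
        states.append(state)
    return states

full    = (1.506127, 1.506127, 1.506127)  # six decimals "in memory"
rounded = (1.506,    1.506,    1.506)     # three decimals, "off the printout"

a, b = run(full), run(rounded)
for i in range(0, 2501, 500):
    print(f"t = {i * 0.01:5.1f}   x_full = {a[i][0]:9.4f}   x_rounded = {b[i][0]:9.4f}")
```

Run it and the two x columns track each other closely at first, then part ways completely – the tabular version of Figure 1.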

[Image: lorenz1]

Figure 1. Sensitive dependence on initial conditions: Graphs of the two runs superimposed on each other. X-axis is time, Y-axis is an atmospheric variable. Notice how the plot shapes are similar at first but begin to look drastically different as time progresses. Source – Chaos: Making A New Science, by James Gleick.

Much of the hullabaloo about sensitive dependence arises from the challenge it poses to one of the foundational philosophies of classical science – determinism, which holds that the future of a system can be uniquely determined from its present state by applying natural laws. A useful example would be the ball we talked about above – if you were to drop a ball now, you would know that it would inevitably fall down and that its path would approximate a straight line; if you were to throw it forward, you would know that it would still reach the ground, but this time it would trace a parabola. In other words, you can predict the future state(s) of the ball based on its present state. There was a time when all physical systems were thought to be deterministic, but we now know that that is not the case: the motion of particles suspended in a fluid is random, quantum mechanics is inherently probabilistic, and so on. Both random and chaotic systems are unpredictable, but the devil, as is his habit, lies in the details.

Random systems are unpredictable because they do not obey any deterministic laws – they are the equivalent of Harvey Two-Face deciding whether to kill you or spare you by tossing a fair coin. They can have two different final states for the same initial state. Chaotic systems, on the other hand, are unpredictable despite the fact that they obey deterministic laws; in fact, one could argue that they are unpredictable because they obey these deterministic laws. Given a set of initial conditions, there can only be one final condition, and this final condition follows directly from physical laws. Unpredictability arises, as in the Lorenz model, from our inability to specify the set of initial conditions accurately. One of the implications of Lorenz’s work on chaos was that long-term prediction of the weather is effectively impossible because there is no way for us to know the complete state of the atmosphere at any point in time, even though the atmosphere is a glorified fluid and obeys the known laws of fluid dynamics. The following is a wonderfully succinct definition of chaos attributed to him:

“Chaos: When the present determines the future but the approximate present does not approximately determine the future.”

There is a bit more to the concept of sensitive dependence (and indeed, chaos) than meets the eye, but before we get into that, I should probably define some terms I’m going to be using.

1. A dynamical system is, in lay language, any object or collection of interacting objects that is undergoing a change. A ball falling from a tower is a dynamical system, water flowing from a tap is a dynamical system, and so is the stock market. The study of chaos is in fact a subset of the larger and more profound area of study called Dynamical Systems Theory. Dynamical systems are modelled using mathematical equations and sometimes the equations themselves are called dynamical systems.

A more mathematically precise definition for dynamical systems would be: a set of states, together with a rule that determines the future state in terms of the present state. A state, simply put, is a set consisting of values of variables that are of interest in a system. The state of a ball falling, for example, can be described by two variables: the height from the ground and the vertical velocity.

Dynamical systems can be modelled using either differential equations or iterated maps, AKA difference equations. Iterated maps tend to be simpler and usually look like x_{t+1} = x_{t} + 2, where x is the state variable, x_t represents the state at the present instant of time, and x_{t+1} denotes the state of the system at the next instant of time. We place one additional constraint on dynamical systems: that for a given present state, the future state should be unique, i.e., the system should be deterministic.

2. An orbit, then, is nothing but the collection of states that the system passes through as time progresses, given a certain initial state. Iterated maps work just like ordinary mathematical functions – they take in a number and spit out another. The only difference is that with these maps, I would keep numbers running through them repeatedly, and the output of one such step would become the input for the next step. If I applied the map defined above to an initial state x_0 = 1, the sequence of states it would pass through, i.e. its orbit, would be:

x_0 = 1, x_1 = 3, x_2 = 5, x_3 = 7, x_4 = 9
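In code, computing an orbit boils down to a three-line loop. A quick sketch (the helper name orbit is my own) that reproduces the sequence above:

```python
def orbit(f, x0, n):
    """Return the orbit of x0 under the map f: [x0, f(x0), f(f(x0)), ...]."""
    states = [x0]
    for _ in range(n):
        states.append(f(states[-1]))
    return states

print(orbit(lambda x: x + 2, 1, 4))  # [1, 3, 5, 7, 9]
```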

A naive way to measure dependence on initial position is by looking at the distance between the orbits of two initial points. This can be done easily by plotting a graph of the state variable vs. time (this type of graph is called a time series) for both initial conditions, and seeing how their respective orbits behave – is the gap between the plots constant? Does it increase or decrease? And so on.

With the system x_{t+1} = x_{t} + 2, it is easy to see that two points that start close together, say x_0 = 1 and x_0 = 1.5, will remain exactly the same distance from each other after each iteration of the system. The time series of the system, shown below in Figure 2, confirms this. We can safely say that this system is not very sensitive to initial conditions.

[Image: it1]

Figure 2. Time series of x_{t+1} = x_{t} + 2: Orbits with initial conditions x_0 = 1 and x_0 = 1.5 are plotted together. The gap between the orbits remains constant throughout.
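You can confirm the constant gap numerically in a couple of lines – a sketch of the same comparison as Figure 2:

```python
# Gap between two orbits of x_{t+1} = x_t + 2, started 0.5 apart.
a, b = 1.0, 1.5
for t in range(6):
    print(f"t = {t}   gap = {abs(b - a):.2f}")  # prints 0.50 every time
    a, b = a + 2, b + 2
```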

Consider another system – something like x_{t+1} = x_{t}^2. When the procedure above is repeated with this system, we get Figure 3.

[Image: sqit]

Figure 3. The dynamical system x_{t+1} = x_{t}^2: The plot on the left is an ordinary time series – orbits plotted have initial conditions x_0 = 2 and x_0 = 2.05. It is a little hard to see from the time series exactly how far apart these orbits grow, hence the graph on the right – where the difference between states is computed for each iteration and plotted as a function of time. The difference between states, as can be seen, increases really fast. 

After just eight applications of the map (nine plotted states), points which started a mere 0.05 apart have grown to differ by a number on the order of 10^{79} – take a minute to wrap your head around that. If that horde of zeroes isn’t indicative of a tornado in Texas, I don’t know what is.
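The numbers behind Figure 3 take only a couple of lines to reproduce – a sketch, with the 10^{79}-sized gap showing up at t = 8:

```python
# Separation of two orbits of x_{t+1} = x_t**2, started 0.05 apart.
a, b = 2.0, 2.05
for t in range(9):
    print(f"t = {t}   |difference| = {abs(b - a):.3e}")
    a, b = a * a, b * b
```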

And yet – this is a misconception that plagues most popular accounts of chaos ‘theory’ – sensitive dependence on initial conditions alone does not imply chaos. This is important and worth repeating – sensitive dependence is a necessary ingredient of chaos, but it is not sufficient on its own. There are dynamical systems that are sensitive to initial conditions but by no means chaotic. The system plotted above, we’ll call it the x^2-map, happens to be one of them.

To understand how it differs from a truly chaotic system, let’s examine a relatively simple one called the logistic map:

x_{t+1} = kx_{t}(1-x_t)

As you might have guessed from the fact that this map has a name and everything, it is particularly important and frequently comes up in introductory descriptions of dynamical systems. The logistic map shows a surprisingly rich range of dynamical behaviour which depends upon the value of the parameter k. However, a detailed discussion of this behaviour will have to wait for a future instalment in this series. For now, we will concern ourselves with the case k=4, which is when it becomes fully chaotic. Figure 4 depicts the time series of the logistic map when k=4.

[Image: logi1]

Figure 4. Time series of the logistic map when k=4: Twenty iterations of the logistic map have been plotted and the orbits have initial conditions x_0 = 0.4 (blue) and x_0 = 0.4005 (black). Notice how the orbits practically touch each other for the first six iterations and dance around erratically afterwards.
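Figure 4 is just as easy to reproduce – a sketch of twenty iterations of the fully chaotic k = 4 case, with the same two starting points:

```python
# Two nearby orbits of the logistic map x_{t+1} = 4 * x_t * (1 - x_t).
blue, black = 0.4, 0.4005
for t in range(21):
    print(f"t = {t:2d}   blue = {blue:.6f}   black = {black:.6f}"
          f"   gap = {abs(black - blue):.6f}")
    blue, black = 4 * blue * (1 - blue), 4 * black * (1 - black)
```

The gap column roughly doubles, on average, with each iteration until it is as large as the orbits themselves – after which the two columns have nothing to do with each other.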

Now, both the logistic map and the x^2-map are sensitive to initial conditions, but only the logistic map is chaotic. The difference, you’ve probably figured out, is that the gap between orbits in the x^2-map behaves predictably. We know that it will increase after every iteration, and with a bit of calculation, we can even find out how large it will be after a specific number of iterations – no chaos here. The logistic map, on the other hand, is a holy mess – at the tenth iteration (t=10), it looks like there is a pattern to how the black and blue plots will behave, but in the very next iteration the black plot goes one way and the blue plot another, starting off what looks like a drunk tango between the two orbits. There is even some semblance of order in between – the black orbit oscillates twice around the same two points from t=14 to t=18, only to wander away again afterwards. Indeed, in any chaotic system, you are almost certain to find pockets of data that appear to conform to a pattern – appear being the key word here. The takeaway, I repeat, is that we simply can’t tell how small errors will affect the system. As Lorenz said in the 1972 paper that gave the Butterfly Effect its name:

“If the flap of a butterfly’s wings can be instrumental in generating a tornado, it can equally well be instrumental in preventing a tornado.”

Now, I’ve stated that plain old sensitive dependence by itself has squat to do with chaos – it must be laced with unpredictability. But the unpredictability of chaos has a certain…character to it. Chaotic systems are unpredictable in a very specific way, and it is this manner of unpredictability, not mere sensitivity to initial conditions, that defines chaos. That manner of unpredictability, along with an overview of some common dynamical behaviours, will be the subject of the second article in this series.

to be continued…

References

1. Chaos: Making A New Science, by James Gleick.

2. Nonlinear Dynamics and Chaos, by Steven H. Strogatz.

3. Chaos: An Introduction to Dynamical Systems, by Kathleen T. Alligood, Tim D. Sauer, and James A. Yorke.

4. Simple mathematical models with very complicated dynamics, by Robert M. May.

5. Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?, by Edward N. Lorenz.
