A novel argument is presented, by which the phenomenon of “collapse of the wave function” is derived from two assumptions: that probability propagation is carried out by some process of finite computational capability, and that physical systems are governed by the most general probability logic that can consistently be applied to them. In brief, the argument I propose is as follows. Complex-number probabilities (à la Youssef) describe events that can propagate probability-values forwards or backwards in time. But complex dynamical systems (such as human brains, or measuring devices) lose information about their initial conditions as they evolve, and therefore in a complex dynamical system, one can no longer propagate probability-effects backwards in time – unless one has an effectively infinite computational capability at one’s disposal. Thus, if one assumes that probability propagation is governed by some observer/calculator with finite computational capability, one concludes that, where complex dynamical systems are concerned, one can no longer reason using complex probabilities; one has to reason using real probabilities. Therefore, when a quantum system becomes correlated with a complex dynamical system, it “collapses,” meaning that it is only explicable in terms of real probabilities.
Keywords: exotic probabilities, quantum measurement, algorithmic information, chaos
This note presents a novel approach to the problem of quantum measurement, which is based on connecting elements from several different areas: nonlinear dynamics, computation theory, exotic probability theory, and the quantum theory of measurement. The discussion is heuristic rather than fully rigorous – the goal is to present a new way of looking at the problem, rather than to give a fully polished and packaged “solution.”
A brief version of my argument here runs as follows.
1. I assume that any system in the world must be modeled using some form of probability theory; and, on mathematical grounds, the only forms that have acceptable properties are the real, complex, quaternionic and octonionic probability theories.
2. I assume that any given system should be modeled using the most general probability theory that can consistently be applied to it.
3. The quantum domain is best modeled using complex-valued probabilities.
4. Complex-valued probabilities naturally lead to a world-model in which systems can propagate probabilities both forwards and backwards in time.
5. Therefore, any system that cannot propagate probabilities both forwards and backwards in time cannot consistently be modeled using complex-valued probabilities (nor quaternionic or octonionic ones, since these contain the complex probabilities as a special case), and must be modeled using real-valued probabilities instead.
6. Chaotic dynamical systems lose information over time, and after a certain period of time they use up all the information in their initial conditions, so they cannot possibly be time-reversible. Or, to put it more precisely: from the point of view of any observer/calculator with finite computational capability, there are many chaotic dynamical systems such that reversing the dynamics of those systems is an unsolvable problem.
7. Therefore, if one is calculating probabilities regarding a chaotic dynamical system – and one is an observer/calculator with finite computational capacity – one must manipulate these probabilities using real probability theory.
8. While quantum theory as conventionally formulated does not support chaos in the standard sense, if one considers a discrete, computable analogue of quantum theory, it is easily seen to display a discrete, computable analogue of chaos, which possesses the irreversibility property mentioned above.
Thus, we have the conclusion that quantum measurement consists of the registration of a quantum event in a chaotic dynamical system – the irreversibility of whose dynamics renders complex-probability modeling illogical for any finitely-capable observer/calculator, making real-probability modeling a necessity for that observer. The collapse of the wave-function is viewed as a consequence of the collapse of time-symmetric probability propagation.
There are two mysterious things here. The first lies in point 2 of the argument above. Why does the universe work in such a way that each system must be modeled using the most general probability theory that can be applied to it? I don’t propose any resolution to this question here; my goal is merely to argue that, if we make this assumption, then the “peculiar” nature of quantum measurement falls out naturally (along with, as Youssef has shown, the rest of quantum mechanics).
Another way to frame this point is: I don’t try to explain why the quantum world is strange. What I try to explain is why, if the quantum world is so strange, the world we see around us is not so strange. The reason is that the weirdness of chaos squelches the greater weirdness of the quantum world.
The second mysterious thing is the role of the so-called “observer/calculator” with finite computational capability, mentioned in steps 6 and 7. This finitude is necessary in order for the reversal of chaotic systems to be illogical. This “observer/calculator” is not an observer in the sense of quantum measurement – he’s not necessarily collapsing anything – rather he’s mediating the transmittal of (potentially complex) probabilities forwards or backwards through time. To avoid confusion with the term “observer” as typically used in the quantum theory of measurement, from here on I’ll call this observer/calculator an “agent” or, more fancifully, a “Dynamics Master.”
A further subtlety in the above argument, technically, lies in the relationship between chaos and quantum theory. Quantum theory as typically formulated does not allow chaos as typically formulated – because quantum mechanics portrays dynamics in terms of the iteration of linear operators, whereas chaos is a phenomenon that arises via the iteration of nonlinear operators. The field of “quantum chaos” has wrestled with this problem in various ways. My approach is to deviate both from the ordinary definition of chaos, and from the ordinary way of thinking about quantum dynamics, and to project both chaos and quantum dynamics into a discrete, computational domain. Chaos is reformulated as a property that computational systems may possess relative to particular computational observers; and according to this definition of chaos, there is no problem with computational approximations of quantum dynamics giving rise to chaotic behavior. This approach to chaos makes the above argument about chaos and measurement work, but introduces an interesting philosophical subtlety: in order to resolve the measurement problem, we need to do all our analysis in the context of the universe as understood by some particular “reference computational system.”
2. Quantum Theory and Exotic Probability Theories
The approach to quantum measurement presented here makes use of the notion of “quantum probability theory,” as developed in a series of papers by Saul Youssef [2,3]. In fact, my arguments about computational chaos, finite computational capacity and quantum measurement could potentially be presented independently of Youssef’s approach – but Youssef’s work provides a simple and elegant connection between chaotic irreversibility on the one hand, and quantum dynamics on the other. If Youssef’s work were removed from the argument, it would be necessary to find some other way to justify the implication “irreversibility implies no quantum dynamics,” and I have no alternative ready at hand.
Youssef’s work relies on the notion of exotic probability theories. In ordinary probability theory, one assigns probabilities to events and then manipulates these probabilities using special arithmetical rules. Probabilities are numbers between 0 and 1, as in “there’s a .5 chance of rain tomorrow” or “there’s a .9 chance that Ben will say something obnoxious in the next 55 minutes.”
Ordinarily, probabilities are real numbers, but one can also develop an alternative probability theory – one that is completely mathematically consistent and obeys the standard axioms of probability theory (except for the axiom that states probabilities are real numbers) – in which probabilities are complex numbers. So we might say “there’s a .5 + .6i chance that the electron will pass through this portion of the diffraction grating.” This seems intuitively rather peculiar – but Youssef has shown that, if we consider probabilities as complex numbers, then after some fairly natural mathematical manipulations, the Schrödinger Equation falls out as a natural consequence.
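To make the contrast concrete, here is a minimal toy calculation (my own illustration, using ordinary quantum amplitudes as a stand-in for Youssef’s complex probabilities, with made-up numbers): two paths carrying complex values of equal magnitude but opposite phase combine into an observed probability of zero, a destructive-interference result that no assignment of real probabilities to the individual paths could reproduce.

```python
import cmath

def real_prob_two_paths(p1, p2):
    # Real-probability reasoning: the chance of "path 1 or path 2"
    # is just the sum of the individual chances.
    return p1 + p2

def complex_prob_two_paths(a1, a2):
    # Complex-probability reasoning: add the complex values first,
    # then take the squared magnitude to get an observed probability.
    return abs(a1 + a2) ** 2

# Two paths with equal magnitude but a relative phase of pi
# (illustrative numbers only).
amp1 = cmath.rect(0.5 ** 0.5, 0.0)
amp2 = cmath.rect(0.5 ** 0.5, cmath.pi)

classical = real_prob_two_paths(abs(amp1) ** 2, abs(amp2) ** 2)  # 1.0
quantum = complex_prob_two_paths(amp1, amp2)                     # ~0.0
```

The cross-term produced by adding complex values before squaring is exactly what real-probability bookkeeping cannot express.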
Quantum probability theory provides a beautiful crystallization of the true weirdness – from a human-intuition perspective – of quantum reality. One must reason about the probabilities of observed phenomena using real probabilities. But, one must reason about the probabilities of unobserved phenomena using complex probabilities. Now, of course, one can never verify directly that these unobserved phenomena are actually obeying the conclusions of complex-probability inference. But assuming this allows one to make very simple explanations of the real-probability-tables one creates from tabulating the results of observations. So the simplest known explanation of the real probabilities one observes via one’s laboratory equipment, is that when one isn’t looking, the world is acting in a way that can only be predicted via complex probabilities! Somehow, complex probabilities are the logic of the unknowable, whereas real probabilities are the logic of the knowable.
Youssef’s mathematics also leaves room for yet more exotic probability theories – involving not just the complex numbers, but abstract “numbers” drawn from the quaternionic and octonionic algebras. As Geoffrey Dixon and others have observed, the octonionic algebra has all sorts of subtle relationships with modern physics, including the Standard Model of strong and electroweak forces as well as general-relativistic gravitation. It seems possible that octonionic probabilities are part of the key to finding a unified model of quantum theory and general relativity, but obviously this is pure speculation at present.
3. Computational Chaos and Irreversibility
Now I will introduce another thread into the discussion – the notions of chaos and computational chaos, and their relationship to the arrow of time. The arrow of time is, I will argue, what fundamentally distinguishes macroscopic systems from quantum systems, and what leads to the need to apply real-valued rather than complex-valued probability theory to the former.
A chaotic dynamical system, informally speaking, is one whose dynamics are so unpredictable as to look random. A complex dynamical system – the more interesting kind, and the more common kind in reality – is one that has some chaotic aspects, but which has some predictable patterns to its dynamics even though it’s not predictable in detail.
There is a formal mathematical definition of chaos; however, this definition is applicable only to continuous-variable nonlinear dynamical systems. The limitations of this definition have created a number of confusions. For one thing, most research work on chaotic systems involves computer simulations, which are discrete by nature and therefore can never truly demonstrate formal mathematical chaos. Also, quantum systems can never truly be chaotic according to ordinary quantum physics, because quantum theory portrays dynamics as linear (although, as Zurek has shown, aspects of the states of quantum systems may evolve chaotically even though their states as a whole cannot). The field of quantum chaos essentially studies quantum systems that, when you take their “classical limit,” yield chaotic classical systems.
In order to make my argument here regarding chaos and quantum measurement, I will introduce an eccentric definition of chaos, based on computation theory. I take as the basis for my definition the intuitive notion that chaotic systems are difficult to predict, so that if a given finite observer has knowledge of the state of a chaotic system at time T, the amount of derived knowledge the observer has about the state of the system at time T+N deteriorates rapidly with N.
Suppose we have a specific discrete dynamical system S, controlled by an iteration function F. Let us consider the states of S, at various times, as belonging to a metric space, using a metric that measures distance in bits (for instance, following Chaitin, we may define the asymmetric distance between state A and state B, h(A,B), as the length in bits of the shortest self-delimiting program required to compute B from A; and we may then define d(A,B) = [ h(A,B) + h(B,A) ]/2 ). Let us also consider the space of dynamical functions controlling dynamical systems as belonging to a similar metric space.
Suppose an agent (an “observer/calculator”) knows the state of S at each time point T with an error of roughly x bits, and knows the dynamical rule F with an error of y bits (where we may have x=0 or y=0). Furthermore, suppose this agent wants to model the system S, but has only a limited fund of computational resources to do so. And, for simplicity, suppose that the state of S, at each point in time, contains about the same number of bits of information (in the sense of minimum description length; this assumption is not necessary, it just makes the discussion simpler).
Suppose our agent tries to “model” S by creating a program P that allows it to reconstruct the state of S at time T+N from the state at time T, for any T and some fixed N. This program P is allowed to use the agent’s knowledge of the state of S at time T and it’s allowed to use the agent’s knowledge of the iteration function F – and that’s it, no additional knowledge may be used. The program P must work within limited computational resources: it can’t be much longer than the length of the shortest self-delimiting program that computes the iteration function F. Specifically, we may say it shouldn’t be longer than the length of the shortest program for computing F, plus log(N), plus some small constant – this is the length of the program saying “repeatedly apply F to some input N times.” The reason for restricting the size of the program P is to prevent the observer from just creating a program P that lists the complete trajectory of the system S over time.
We then say the system S is computationally chaotic if, for x > 0 and y = 0, the average error incurred by the optimal P increases exponentially with N. The system S is computationally unstable if, for x = 0 and y > 0, this error increases exponentially with N. These definitions apply to discrete systems and to quantum systems, as well as to continuous-variable nonlinear-dynamical systems. It is fairly easy to show that nonlinear-dynamical systems that are chaotic according to the standard definition are computationally chaotic; and I conjecture that quantum systems that yield chaotic systems in the classical limit will generally turn out to be computationally chaotic as well.
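As a toy illustration of the definition (my own, with the logistic map standing in for a general iteration function F, and hypothetical numbers): an agent whose knowledge of the state carries an error of x ≈ 40 bits loses roughly one further bit of predictive accuracy per iteration, so the optimal P’s error grows exponentially with N until the prediction is worthless.

```python
def logistic(x: float) -> float:
    # The logistic map at r = 4, a textbook chaotic iteration function F.
    return 4.0 * x * (1.0 - x)

def iterate(x: float, n: int) -> float:
    # "Repeatedly apply F to some input N times" -- the short program
    # the agent is allowed to run.
    for _ in range(n):
        x = logistic(x)
    return x

x0 = 0.3          # the "true" initial state
eps = 1e-12       # the agent's ~40-bit error in its knowledge of x0

err_10 = abs(iterate(x0, 10) - iterate(x0 + eps, 10))
err_40 = abs(iterate(x0, 40) - iterate(x0 + eps, 40))
# err_n grows roughly like eps * 2**n (about one bit of predictive
# knowledge lost per step) until it saturates at order 1: after a few
# dozen steps the agent's prediction carries no information at all.
```

The exponential growth of the prediction error with N is precisely the condition the definition names.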
Computational chaos explicitly speaks about prediction of a system’s future, but it’s not hard to see that it applies equally to prediction of the system’s past. In a computationally chaotic system, an agent with finite computational resources, observing the system at time T without gaining complete knowledge of the system’s state, will not be able to reconstruct the system at past times, except very recent ones. In this sense, computationally chaotic systems are irreversible. Note that in any reasonably complex quantum system, there is no way for an observer to gather complete information about the system at any point in time, due to the limitations the indeterminacy principle places on the process of measurement. Therefore, if a quantum system is computationally chaotic, there is definitely no way to get around this by pushing the measurement error toward zero. Computationally chaotic quantum systems are definitely irreversible with respect to any observer.
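The backwards case can be made vivid with another toy system (again my own illustration, not the author’s formalism): the doubling map in K-bit fixed point discards one bit of its past at every forward step, so the set of past states consistent with a present observation doubles with every backward step, and an agent reconstructing the state N steps ago faces 2**N equally valid candidates.

```python
K = 16
MOD = 1 << K      # states are K-bit integers (K-bit binary fractions)

def step(x: int) -> int:
    # Forward dynamics: the doubling map 2x mod 1 in K-bit fixed point.
    # The top bit of x is discarded, so one bit of the past is lost per step.
    return (2 * x) % MOD

def preimage_set(states: set[int]) -> set[int]:
    # All states that could have led, in one forward step, to any state
    # in `states`: each reachable (even) state has exactly two preimages.
    out = set()
    for y in states:
        if y % 2 == 0:
            out.add(y // 2)
            out.add(y // 2 + MOD // 2)
    return out

candidates = {0}  # observe state 0 "now" and try to run time backwards
sizes = []
for _ in range(8):
    candidates = preimage_set(candidates)
    sizes.append(len(candidates))
# sizes == [2, 4, 8, 16, 32, 64, 128, 256]: the candidate past
# doubles at every backward step.
```

The present state simply does not contain the information needed to single out its own history; that information would have to come from somewhere else, at exponentially growing cost.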
The notion of computational chaos would seem to apply to most real-world “complex systems” – for instance, it seems intuitively clear that systems like human beings, societies and cells are computationally chaotic. Because of the chaos phenomenon, each moment of time that a chaotic or complex system evolves, it loses information, in a sense. This loss of information is the arrow of time; it is the information-theoretic version of the Second Law of Thermodynamics. Of course, this information loss is often compensated by a pattern gain. Complex systems lose information about their initial conditions, which makes reversibility impossible; but they often do so by converging to “attractor” states, which embody interesting emergent patterns.
Now comes the biggest leap in my argument. I propose that quantum dynamics does not apply to highly computationally chaotic systems. My reasoning is as follows. Quantum dynamics implies that (complex) probabilities propagate both forwards and backwards in time – in the sense that (complex) probabilities at time T can affect (complex) probabilities at time T-N. However, in a computationally chaotic system, this is not possible – unless one assumes that the process propagating the complex probabilities (the observing/calculating agent or “Dynamics Master”) has an effectively unlimited computational capacity. If the propagation of complex probabilities is done by a process of bounded algorithmic information, then there is no way it can figure out what (complex) probabilities at time T-N are implied by the given (complex) probabilities at time T.
Therefore, complex probabilities and the quantum dynamics that follows from them cannot apply to this type of system: they are logically senseless in this context. Therefore, according to the principle articulated above, we must apply real probabilities to these systems – leading to the well-known principle of wave-function collapse.
4. Chaos and Finite Computational Capacity As the Roots of Quantum Measurement
Now I will reiterate the basic argument drawing the threads of the previous two sections together.
Our universe of macroscopic objects and events operates within a realm of real probabilities – and in order for some quantum event to register as observable within our real-probability realm, it must transform itself into a real-probability event as well. My proposal is that this is related to the way we experience time – we experience it as moving forward in one direction; whereas in the quantum domain, (complex)-probabilistic implications can move backwards as easily as forwards, as illustrated by the Aspect experiment, Wheeler’s delayed-choice double-slit experiment and numerous other results. As Youssef has shown, complex probabilities allow us to reason accurately about events that have impact both forward and backward in time. Real-probability inference, on the other hand, seems to work only when one lacks the nonlocal quantum interdependency that allows time-symmetric “causation.”
And so, the argument I propose is as follows. Complex-number probabilities describe events that can propagate probability-values forwards or backwards in time. But complex dynamical systems lose information about their initial conditions, and therefore in a complex dynamical system, one can no longer propagate probability-effects backwards in time – unless one has an effectively infinite computational capability at one’s disposal. Thus, if one assumes that probability propagation is governed by some Dynamics Master observer/calculator with finite computational capability, one concludes that, where complex dynamical systems are concerned, one can no longer reason using complex probabilities; one has to reason using real probabilities. Therefore, when a quantum system becomes correlated with a complex dynamical system, it “collapses,” meaning that it is only explicable in terms of real probabilities.
Along with the assumption of the finite Dynamics Master, the key assumption here is the axiom that one must describe any physical system using the most general probability theory that can consistently be applied to it. Once a system is rendered irreversible via computational chaos, it can no longer be consistently reasoned about using complex probability theory which implies time-symmetric causation. Therefore one must apply real probability theory, so the wave function must “collapse.”
Even if it’s correct, this point of view doesn’t “explain” everything – it doesn’t explain why reality is time-reversible and complex-probabilistic in the first place; and it doesn’t explain why dynamics should be controlled by a finite Dynamics Master. What it does explain is why, if fundamental reality is time-reversible and complex-probabilistic but controlled by an algorithmically finite process, our observable reality is time-irreversible and real-probabilistic. It’s because of the chaotic aspect of our complex dynamics.
The chaotic aspect of complex dynamics appears to come along with attractor formation – i.e. with the formation of emergent patterns in complex systems. Emergent patterns require chaos, which entails irreversibility, which collapses wave functions. Therefore the observable universe of patterns in complex systems will appear to follow real probabilities even if the hypothetical “underlying” quantum world involves complex ones.
I’ve presented some radical heuristic ideas, synthesizing material from several different disciplines into a speculative but novel explanation of why quantum measurement is as it is. So where does one go from here? In order to make these ideas more rigorous, it seems, it is necessary to re-run the argument given here in the context of a coherent, purely computational version of quantum theory. One needs to re-run Youssef’s arguments about complex probabilities in the context of this computational quantum theory, and then connect these arguments with computational chaos. None of this seems prohibitively difficult, but there is clearly a nontrivial amount of mathematical calculation and artful mathematical definition required.
It’s worth noting that, if these ideas are correct, they have interesting implications for the future of quantum computing. They imply that complex dynamical systems are necessarily “classical” in nature because they involve irreversible, chaotic dynamics which can only be reasoned about using real probabilities. However, this doesn’t mean that complex dynamical systems can’t be designed to use dynamically simple subcomponents that make use of quantum-dynamical computation-acceleration magic in predictable ways. It does suggest that there may never be a purely quantum cognitive system – because cognition relies on emergent pattern, and emergent pattern relies on complex dynamics, which relies on irreversibility, which kills time-symmetry and ergo kills complex probabilities and quantum weirdness. But nothing in this line of argument rules out the possibility of an impurely quantum cognitive system, able to dip strategically into the quantum domain for complex-probability powered calculations.
 As usual, when I use the word “complex” in the context of dynamical systems, I don’t refer to complex numbers; rather, I am referring to a system whose dynamics are subtle and difficult to understand. This polysemy of the word “complex” in science is confusing, but has become conventional.