From Complexity to Creativity -- Copyright Plenum Press, © 1997
Part II. Formal Tools for Exploring Complexity
CHAPTER SEVEN
MAGICIAN SYSTEMS AND ABSTRACT ALGEBRAS
Many systems theorists have stressed the "reflexive," "autopoietic" or "self-producing" nature of complex systems (Varela, 1978; Kampis, 1991; Goertzel, 1994; Palmer, 1994). This literature is intriguing and inspiring, and contains many detailed mathematical investigations. On the whole, though, a really useful mathematical model of the concept of reflexive self-production has never emerged. Varela's (1978) "Brownian Logic" may be considered a candidate for this role, as may be Kampis's "component-systems theory." But these theories are more conceptual than operational; they are not easily applied to obtain useful answers to specific questions about specific complex systems.
The goal of this chapter is to propose one possible path along which an operational mathematics of autopoiesis might be developed. I will take the simple notion of a magician system, introduced in Chapter One, and develop it into a more complete mathematical theory.
The magician system model is very general: it contains variants on the genetic algorithm, and also the psynet model of mind. Despite its generality, however, it is not without substance. It enforces a particular way of thinking about, and mathematically analyzing, complex systems.
I will explore both combinatorial and geometric perspectives on magician systems. First I will review how magician systems can be given a geometric formulation, in terms of directed hypergraphs. Then, for the bulk of the chapter, I will turn to algebra, and discuss how special cases of the magician system model can be formulated in terms of iterations on abstract algebraic structures. I will show by a detailed construction how the dynamics of an autopoietic system can be interpreted as a quadratic iteration on an abstract algebra. In the simplest case this algebra is a space of hypercomplex numbers. This construction implies a fundamental system-theoretic role for Julia sets and Mandelbrot sets over algebras -- for hypercomplex fractals and their yet more abstract cousins.
The conclusion that complex system dynamics can be expressed in the language of fractals over algebras has profound evolutionary consequences. For, evolution is often phrased in terms of the maximization of some "fitness criterion." But if the viability of a complex system depends on its position in certain hypercomplex Julia and Mandelbrot sets, then it follows that, in the evolution of complex systems, the fitness criterion is a fractal. If this is true, then the whole business of evolutionary fitness maximization is nowhere near so straightforward as has generally been assumed. Gregory Sorkin (1991) has provided the beginnings of a theory of simulated annealing on fractal objective functions; but we do not as yet have any theory at all regarding crossover-driven genetic optimization on fractal objective functions.
7.2 MAGICIAN SYSTEMS AND GENETIC ALGORITHMS
It is worth pausing for a moment to reiterate the mathematically obvious fact that the genetic algorithm is just a special kind of magician system model. Thus, the ideas of this chapter are a kind of generalization of the ideas of Chapter Six. For example, the "infinite population" theory given there could, in principle, be applied to other types of magician systems as well; however, the calculations would become even more difficult.
In the GA the magicians are the genotypes -- bit strings, real vectors, or whatever. The magician interactions are crossover: A acts on B to produce the offspring of A and B. The random nature of the crossover operator means that we have a stochastic magician system. The selection according to fitness may be modelled in several ways, the simplest of which is to include the environment itself as a magician.
To implement the standard GA, with survival proportional to fitness, the environment magician E acts on other magicians in the following way. First it acts on all the other magicians, in such a way that E*A = E', where E' is a modified E which contains information about the fitness of A. Then, having accumulated this information, it acts on the magicians in a different way: E*A = A if E chooses A to survive, whereas E*A = -A, where -A is the magician that annihilates A, if E chooses A not to survive.
Of course, an environment magician E acting in this way is a somewhat contrived device, but this is because the selection mechanism of the simple GA is itself rather artificial. A more natural artificial life model would make survival dependent upon such operations as food consumption, location of shelter, etc. And these operations are very naturally representable in terms of magician action, much more so than the artificial mechanisms employed in the simple GA. The magician system model tends to push the GA in the direction of biological realism rather than (serial-computer) computational efficiency.
In the language of genetic algorithms, a "structural conspiracy" is nothing but a schema. The evolution of schemas was discussed in the previous chapter, in the context of the evolution of strange attractors for plane quadratic maps. A schema is a collection of similar individuals which tend to produce each other, under the crossover operator (i.e., under magician dynamics). The evolution of a GA population is commonly viewed as a path from broad schemas to narrow schemas, to yet narrower schemas. In dynamical-systems terms, this is the movement from one autopoietic subsystem to another attractor which is a subset of the first, to another attractor which is a subset of the second. From this point of view, the dynamics of the genetic algorithm are not peculiar at all, but are rather indicative of forces at play in a much broader class of complex, self-organizing systems. The genetic algorithm is a paradigm case of the interplay between structure-maintenance (autopoiesis; schema) and adaptive learning (fitness-driven evolution).
Finally, it is worth noting that true autopoiesis, in the sense of spatial structure maintained by self-preserving dynamics, cannot appear in the GA, which is non-spatial, but it can and does appear in the SEE model. Ecology transforms self-producing evolutionary subsystems into autopoietic subsystems.
Magician systems are a new mathematical entity; however, they have many interesting connections with familiar mathematical concepts. The remainder of this chapter will explore these connections, beginning with graphs, then moving to hypercomplex numbers and more complex abstract algebras, and nonlinear iterations on algebras.
An ordinary graph in the sense of discrete mathematics -- a collection of dots and lines -- is not sufficiently general to model a magician system. Instead, if one wishes to represent magician systems graphically, one needs to introduce the dihypergraph: a digraph in which each edge, instead of joining two vertices, joins three, four or more vertices. Formally, a dihypergraph consists of a collection V of "vertices," and a collection E of "hyperedges," each of which is an ordered tuple of elements of V. The length of the largest tuple in E is the "maximum edge size" of the dihypergraph.
It is easy to see how a magician system is a dihypergraph. If A and B combine to produce C, then one draws a directed edge from A and B to C. If A, B and C combine to produce D, then one draws an edge (a directed hyperedge) from A, B and C to D. The ordered nature of the edges in the dihypergraph allows the noncommutativity of magician action: A can act on B, or B can act on A, and these need not yield identical results. Stochastic magician actions, whereby A can act on B yielding a variety of different possible results, can be modelled by labelled dihypergraphs, in which the label of an edge indicates the probability.
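A dihypergraph of this kind is easy to represent directly. The following sketch (in Python; the class and method names are my own, introduced for illustration) stores each hyperedge as an ordered tuple whose last element is, by convention, the magician produced by the preceding source vertices:

```python
class Dihypergraph:
    """A directed hypergraph: each hyperedge is an ordered tuple of
    vertices.  By the convention used here, the last element of a tuple
    is the magician produced by the preceding source vertices, so the
    edge ("A", "B", "C") means "A acting on B produces C"."""

    def __init__(self):
        self.vertices = set()
        self.edges = []

    def add_edge(self, *vertices):
        self.vertices.update(vertices)
        self.edges.append(tuple(vertices))

    def max_edge_size(self):
        # The length of the largest tuple in E.
        return max((len(e) for e in self.edges), default=0)

# A acts on B to produce C; then A, B and C combine to produce D.
h = Dihypergraph()
h.add_edge("A", "B", "C")
h.add_edge("A", "B", "C", "D")
```

A labelled dihypergraph, for the stochastic case, would simply attach a probability to each stored tuple.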
There are two ways to study dihypergraphs: on a case-by-case basis, or statistically. Here I will take the statistical approach; using some standard ideas from the theory of random graphs, I will look at random dihypergraphs. Technically, random graph theory applies only to graphs that are selected from sample spaces according to simple probabilistic models. However, several recent results (Chung and Graham, 1990) indicate that theorems obtained for random graphs can often be extended to deterministic graphs called "quasirandom" graphs without too much difficulty.
Using random dihypergraph theory, I will show that "phase transition"-like connectivity thresholds must exist for random magician systems. Inspired by this result, I will hypothesize that, in many circumstances, magician systems can only maintain autopoiesis in the neighborhood of the threshold. This implies that, in order to survive in the world, real magician systems must evolve the appropriate value for one of two parameters: either size or connection probability.
This idea, albeit speculative, is closely related to another concept which has attracted a great deal of attention in recent years: the "edge of chaos." A number of researchers (Packard, 1988; Langton, 1986) have come to the conclusion that complex systems are in a sense poised between order and chaos. The ideas given here suggest that this "edge of chaos" may be partially understood in terms of the threshold functions found in random graph theory.
Random Graphs
The basic fact about random graphs is the existence of thresholds. For example, suppose one has a collection of n vertices, and one connects these vertices with edges by choosing each edge independently with probability p = c/n. Then Erdos and Renyi (1960) showed that, for large n, the structure of the ensuing graph depends very sensitively on the value c. If c < 1, then the graph almost surely has no components larger than O(log n). If c > 1, on the other hand, then the largest connected subgraph almost surely contains all but O(log n) vertices. For the borderline case c = 1, the largest connected subgraph almost surely contains O(n^(2/3)) vertices.
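The threshold is easy to observe numerically. The following sketch (a rough illustration, not a proof; function name and parameters are my own) samples a graph with p = c/n and measures the fraction of vertices in the largest connected component:

```python
import random

def largest_component_fraction(n, c, seed=0):
    """Sample G(n, p) with p = c/n and return |largest component| / n."""
    rng = random.Random(seed)
    p = c / n
    # Union-find with path halving, to track connected components.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n
```

Below the threshold (c < 1) the largest component is a vanishing fraction of the graph; above it (c > 1) a giant component absorbs a substantial fraction of the vertices.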
This result is only the beginning. Choosing p = c log n/n, one finds disconnected graphs for c < 1 and connected, Hamiltonian graphs for c > 1. Choosing p just a little bigger, one finds graphs with arbitrarily large connectivity and minimum degree. Specifically, to get connectivity and minimum degree d, one must take
p = (log n / n) [1 + (d-1) log log n / log n + w_n / log n]     (4)
where w_n tends to 0 arbitrarily slowly (Erdos and Renyi, 1960). And if one chooses p this way, then as a bonus one gets graphs which almost surely have every specified degree sequence d_1 < d_2 < ... < d_k < d+1 (Bollobas, 1985).
And connectivity is not the only graph property for which such a threshold exists. Bollobas (1985) defines a monotone graph property as any property which becomes more likely when one adds more edges to a graph, but keeps the number of vertices constant. He shows that every monotone property undergoes a "phase transition" at some point, just like connectivity does. Thresholds are not a fluky property of some specific mathematical function, but a basic conceptual, system-theoretic fact.
Random Dihypergraphs
Compared to the vast literature on random graphs, there has been surprisingly little work on random digraphs or hypergraphs. Luczak (1990) has shown that the p = c/n threshold holds for digraphs; and Schmidt-Pruzan and Shamir (1983) have obtained a similar result for hypergraphs. Goertzel and Bowman (1993) have shown that the threshold for digraph connectivity also governs the behavior of fixed-length walks on random digraphs. Finally, the threshold existence results of Bollobas (1985) are basically set-theoretic in nature and can easily be seen to apply even to general random dihypergraphs.
Given the relative absence of work pertaining to random dihypergraphs, it is fortunate that many properties of random digraphs can be carried over to random dihypergraphs in a fairly straightforward manner. Connectivity is a prime example. To see how this works, one need only observe that a dihypergraph H induces a digraph G in a natural way. Namely, draw an edge from v to y in G iff there is an edge in H that involves v and points to y. This sort of "induction" is obviously a many-to-one relationship, in that the same digraph is induced by many different dihypergraphs; but this multiplicity need not be problematic. I may consider a dihypergraph to be "connected" if its induced digraph is connected, i.e. if for any v and y there is some edge leading from v and some set of vertices to y.
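The induction of a digraph from a dihypergraph, and the resulting notion of reachability, can be sketched as follows (the function names and the small example edge set are illustrative):

```python
def induced_digraph(hyperedges):
    """Each hyperedge (v1, ..., vk, y) points its source vertices at the
    target y.  Returns a dict mapping each source vertex to the set of
    vertices it reaches directly in the induced digraph."""
    adj = {}
    for edge in hyperedges:
        *sources, target = edge
        for v in sources:
            adj.setdefault(v, set()).add(target)
    return adj

def reaches(adj, start, goal):
    """Search the induced digraph for a path from start to goal."""
    seen, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        if v == goal:
            return True
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return goal in seen

edges = [("A", "B", "C"), ("C", "D"), ("B", "C", "A")]
adj = induced_digraph(edges)
```

Here A reaches D through the chain A -> C -> D in the induced digraph, even though no single hyperedge connects them.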
Suppose that I construct a random dihypergraph H by selecting each t-edge with probability
p* = c/n^(t-1)     (5)
Then a simple calculation shows that each edge in the induced digraph G is chosen with probability c/n. So as c passes through the value 1, there is a bifurcation in the structure of the induced digraph. And this bifurcation, in itself, tells us something about the structure of the dihypergraph H.
First of all, if c<1, I know that the largest component of G is very small. This implies that the largest component of H is just as small. On the other hand, if c>1, I know that the chance of a path in G from v to y is very high; and it follows automatically that the chance of a path in H from v to y is the same. If almost all of G is connected, then almost all of H is connected. So, in short, if I assume equation (5), then the threshold at c=1 is there for dihypergraphs as well. Similarly, corresponding to the threshold function (4), I obtain for dihypergraphs
p = (log n / n^(t-1)) [1 + (d-1) log log n / log n + w_n / log n]     (6)
Not all questions about dihypergraphs can be resolved by reference to induced digraphs. However, the specific questions with which I am concerned here may be treated quite nicely in this manner. Below c = 1, autopoietic magician systems of a reasonable size are quite unlikely. As c grows past 1, they get more and more likely. There is a critical point, past which significant autocatalysis "suddenly" becomes almost inevitable. Then, as p passes c/n^(t-1) and log n/n^(t-1), connectivity gets thicker and thicker, until every vertex connects to arbitrarily many others.
Practical Implications of the Threshold
There is no reason to assume that the formula p = c/n^(t-1) is adhered to by real magician systems. Consider, for instance, a system of proteins, some of which are enzymes, able to catalyze the reactions between other enzymes (see e.g. Bagley et al, 1992 and references therein). This sort of enzyme system is naturally modeled as a magician system. What determines p in an enzyme system is the recognitive ability of the various enzymes. The parameter p represents the percentage of other enzymes in the system that a given enzyme can "latch onto."
What happens in reality is that p increases with n. For instance, if the average recognitive ability of enzymes remains fixed, and one adds more enzymes, then the chance of a reaction involving a particular enzyme will increase. But as one adds more and more enzymes, and p progressively increases, eventually p will reach a point where equation (5) is approximately valid. At this point, the network will just barely be connected. A few enzymes fewer, and virtually no pairs of enzymes will interact. A few enzymes more, and the system will be a chaotic hotbed of activity.
Or, on the other hand, suppose one holds n constant. Then one has a similar "optimal range" for p. In a system of fixed size, if the connection probability p is too small, then nothing will be connected to anything else. But if p is too large, then everything will connect to many other things.
Connectivity and Dynamics
What is the effect of the connectivity threshold on random magician dynamics? It is clear that sub-threshold conditions are not suitable for the origin of large autopoietic magician systems. A disconnected graph does not support complex dynamics. What I suggest is that, for many real-world dynamics, too much connectivity is also undesirable. This implies that useful dynamics tend to require a near-threshold balance of p and n.
Why would too much connectivity be bad? Well, consider: if the average degree is 1, then a change in one component will on average directly affect only one other component. The change will spread slowly. If the average degree is less than one, a change in one component will spread so slowly as to peter out before reaching a significant portion of the network. But if the average degree exceeds one, then a change in one component will spread exponentially throughout the whole network -- and will also spread back to the component which originally changed, changing it some more.
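The branching argument above can be made concrete with a small calculation. The sketch below is a naive expected-value approximation (it ignores overlaps among affected components, and the function name is my own): each affected component perturbs, on average, avg_degree others in the next round.

```python
def expected_spread(avg_degree, steps):
    """Expected number of components touched by a change after `steps`
    rounds, under the naive branching approximation: each newly affected
    component directly affects avg_degree others in the next round."""
    affected, frontier = 1.0, 1.0
    for _ in range(steps):
        frontier *= avg_degree
        affected += frontier
    return affected
```

With average degree below one the total spread stays bounded no matter how long one waits; at exactly one it grows only linearly; above one it explodes exponentially.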
Of course, this is only an heuristic argument. In some instances this sort of circular change-propagation is survivable, even useful. But even in these situations, I suggest, there is some maximal useful average degree, beyond which the "filtering" properties of the dynamics will cease to function, and nonproductive chaos will set in. This, then, is our key biological prediction:
Hypothesis. A typical magician system has a maximal useful average degree which is much less than n.
Given this, and assuming that a "typical" magician system is constructed in a roughly "quasirandom" way (Chung and Graham, 1990), then formula (4) above implies that the threshold for magician systems is not too far off from the simple threshold at p = c/n^(t-1).
The Origin of Life, and the Edge of Chaos
To make these ideas more concrete, it is perhaps worth noting that they have an intriguing, if speculative, connection to the question of the origin of life. Oparin's classic theory (1965; see also Dyson, 1982) suggests that life initially formed by the isolation of biomolecules inside pre-formed inorganic solid barriers such as water droplets. These inorganic "cell walls" provided the opportunity for the development of metabolism, without which the construction of organic cell walls would have been impossible. They provided part of the "scaffolding" on which life was built.
The details of this process are well worth reflecting on. All sorts of different-sized collections of biomolecules could have become isolated inside water droplets. But only when the right number were isolated together did interesting dynamics occur, with the possibility of leading to metabolic structure. This lends a whole new flavor to the Oparin theory. At very least, it should serve as a warning to anyone wishing to calculate the probability of metabolism forming by the Oparin scheme. One cannot talk about metabolic networks without taking the possibility of threshold phenomena into account.
This way of thinking is very closely related to the idea of the "edge of chaos," reported independently by Norman Packard and Chris Langton and discussed extensively in Lewin's book Complexity, among other places. The idea is that most complex systems operate in a state where their crucial control parameters are poised between values that would lead to dull, repetitive order, and values that would lead to chaos. Packard and Langton found this to be true in simulations of cellular automata. Stuart Kauffman (1993) discovered the same phenomenon in his experiments with random Boolean networks: he found, again and again, a strict connectivity threshold between dull static or periodic dynamics and formless chaotic dynamics.
One cannot rigorously derive the results of Packard, Langton and Kauffman from random graph theory, at least not in any way that is presently apparent. But the relation between their results and the graph-theoretic ideas of this section is too obvious to be ignored. If there is ever to be a precise theory of the edge of chaos, one feels that it will have to have something to do with random graphs.
7.4 HYPERCOMPLEX NUMBERS AND MAGICIAN SYSTEMS
Having connected magician systems with graph theory, we will now turn to a different branch of mathematics -- abstract algebra. As it turns out, certain types of magician systems have a very natural representation in terms of the algebra of hypercomplex numbers.
Hypercomplex numbers are obtained by defining a vector multiplication table on ordinary d-dimensional space (R^d). The vectors form an algebra G under multiplication and a vector space under addition; the multiplication and addition interact in such a way as to form a ring. Each coordinate represents a basic system component, and thus each vector represents a "population" of components. A vector coupled with a graph of interconnections constitutes a complete model of a system. The vector multiplication signifies the intercreation of components; i.e., A * B = C is interpreted to mean that component A, when it acts upon component B, produces component C.
In order to express magician systems in terms of hypercomplex numbers, we will at first consider the simplest case of magician systems: deterministic magicians, who are living on a complete graph, so that every magician is free to act on every other magician. In this case, when formulated in the language of hypercomplex numbers, magician dynamics can often be reduced to a simple quadratic iteration, z_{k+1} = z_k^2, where the z_k are vectors in R^d, and the power two is interpreted in terms of the G multiplication table. If one adds a constant external environment, one obtains the equation z_{k+1} = z_k^2 + c, familiar in dynamical systems theory for the case where d = 2 and the multiplication table is defined so that instead of hypercomplex numbers, one has the complex number system.
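A minimal sketch of such an iteration follows. The encoding of the multiplication table as structure constants is an assumption of mine, introduced for illustration: table[i][j] lists (k, coefficient) pairs meaning e_i * e_j = sum of coefficient * e_k. With d = 2 and the table below, the iteration recovers the familiar complex quadratic map.

```python
def multiply(x, y, table):
    """Multiply two vectors in R^d using a structure-constant table:
    table[i][j] is a list of (k, coeff) pairs, e_i * e_j = sum coeff*e_k."""
    d = len(x)
    out = [0.0] * d
    for i in range(d):
        for j in range(d):
            for k, coeff in table[i][j]:
                out[k] += coeff * x[i] * y[j]
    return out

def iterate(z, c, table, steps):
    """The quadratic iteration z_{k+1} = z_k^2 + c under the given table."""
    for _ in range(steps):
        z = [a + b for a, b in zip(multiply(z, z, table), c)]
    return z

# The complex numbers arise from d = 2, with e_0 = 1 and e_1 = i:
complex_table = [
    [[(0, 1.0)], [(1, 1.0)]],   # 1*1 = 1,  1*i = i
    [[(1, 1.0)], [(0, -1.0)]],  # i*1 = i,  i*i = -1
]
```

Any other hypercomplex algebra is obtained simply by supplying a different table; nothing in the iteration itself changes.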
One of the most striking implications of this approach is that the hypercomplex equivalents of Julia sets and Mandelbrot sets (Devaney, 1988) play a fundamental role in autopoietic system dynamics. This follows directly from the appearance of the equation z_{k+1} = z_k^2 + c, noted above. Specifically, the Julia set of a system, in a certain environment, contains those initial system states which lead to relatively stable (i.e. bounded) behavior. The Mandelbrot set contains those environments which lead to bounded behavior for a large contiguous region of initial system states.
The theory of Julia and Mandelbrot sets may need to be substantially generalized in order to apply to hypercomplex number systems (let alone to more general algebras), but the basic point remains: the dynamical properties which Julia sets and Mandelbrot sets identify in the complex plane, when translated into an hypercomplex setting, are precisely the simplest of the many properties of interest to autopoietic system theorists. For the first question that one asks of an autopoietic system is: Is it viable? Does it successfully sustain itself? Does it work? Julia sets and Mandelbrot sets address this question; they speak of system viability. Other system properties may be studied by looking at appropriately defined subsets of Julia sets.
So far as I have been able to determine, no serious mathematical work has been done regarding hypercomplex Julia sets. However, several researchers have explored these sets for artistic purposes. Alan Norton produced videos of quaternionic quadratic maps, entitled Dynamics of ei theta x(1-x) and A Close Encounter in the Fourth Dimension; the background music in these videos, composed by Jeff Pressing, was constructed from the same equations and parameter values (Pressing, 1988). More recently, Stuart Ramsden has produced a striking video of the Julia sets ensuing from the iteration z^2 + c over the real quaternion algebra, using a new graphical method called "tendril tracing" (Ramsden, 1994). This kind of graphical exploration becomes more and more difficult as the dimensionality of the space increases; for after all, one is projecting an arbitrarily large number of dimensions onto the plane. But nevertheless, this sort of work is worthy of a great deal of attention, because what it is charting is nothing less than the dynamics of autopoietic systems.
Finally, I must be absolutely clear regarding what is not done in this chapter. I will not solve any of the difficult and interesting mathematical problems opened up by the concept of "hypercomplex fractals"; nor will I give serious practical applications of this approach to complex system modeling. The purpose of this chapter is quite different: it is simply to call attention to a new approach which appears to hold a great deal of promise. The hope is to inspire mathematical work on quadratics on hypercomplex number systems and related algebras, and practical work formulating system structure in terms of hypercomplex numbers and related algebras. In the final section I will propose a series of conjectures regarding complex systems and their algebraic formulations. The resolution of these conjectures, I believe, would be a very large step toward the construction of a genuine complexity science.
Magician Systems as Abstract Algebras
Having made appropriate restrictive assumptions, it is perfectly easy to express magician systems as abstract algebras. Let us begin with the case in which there is only one algebraic operation, call it "+". In this case, it is very natural to declare that the algebraic inverse of element A is the antimagician corresponding to A. And, once this is done, one would like the annihilation of magicians and antimagicians to be expressed by the identity A + -A = 0. The element "0" must thus be understood as an "impotent magician," which is unable to annihilate anything because it is already null.
But if we are using the operation + to denote annihilation, then we need another operation to indicate the action of magicians upon one another. Let us call this operation "*," and let A * B refer to the product resulting when magician A casts its spell upon magician B. Thus a magician can act upon its own opposite without annihilating it, since -A * A need not equal 0.
This notion of + and * naturally leads one to view magician systems as linear spaces constructed over algebras. One need only assume that, where M denotes the magician system in question, {M,+,*} forms a ring. For instance, suppose one has a magician system consisting of 3 copies of magician A, 2 copies of magician B, and 4 copies of magician C. Then this system is nicely represented by the expression
3A + 2B + 4C
The result of the system acting on itself is then given by the expression
(3A + 2B + 4C) * (3A + 2B + 4C).
For what the distributive law says is that each element of the first multiplicand will be paired exactly once with each element of the second multiplicand. The production of, for instance, 12 A * C's from the pairing of the first 3A term with the last 4C term makes perfect sense, because each of the three A magicians gets to act on each of the four C magicians.
The annihilation of magicians and antimagicians is taken care of automatically, as a part of the purely mechanical step by which multiple occurrences of the same magician are lumped together into a single term. For instance, consider
(A + B) * (A + B) = A*A + A*B + B*A + B*B
and suppose the magician interactions are such that
A*A = -B
A*B = B*A = B*B = B
Then the result of the iteration will be
(A + B) * (A + B) = -B + B + B + B = 2B
The single antimagician -B annihilates a single magician B just as the magician dynamic indicates, while the other two B's are combined into a single term 2B. Whereas the annihilation serves a fundamental conceptual function relative to the magician system model, the combination of like terms does not; yet both are accomplished at the same time.
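The bookkeeping just described -- distribution, cancellation, and the lumping of like terms -- can be mechanized directly. The sketch below (the function name and encoding are my own) represents a system as a map from magician names to coefficients, with antimagicians encoded as sign factors in the product table; it reproduces the worked example above.

```python
from collections import defaultdict

def system_product(x, y, table):
    """Product of two formal sums of magicians.

    x and y map magician names to (possibly negative) coefficients;
    table[(a, b)] gives the (name, sign) produced when a acts on b,
    with sign -1 meaning the antimagician is produced.  Like terms are
    combined, and magician/antimagician pairs cancel, automatically."""
    out = defaultdict(float)
    for a, ca in x.items():
        for b, cb in y.items():
            name, sign = table[(a, b)]
            out[name] += sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# The worked example from the text: A*A = -B, all other products are B.
table = {("A", "A"): ("B", -1),
         ("A", "B"): ("B", 1),
         ("B", "A"): ("B", 1),
         ("B", "B"): ("B", 1)}
state = {"A": 1, "B": 1}
```

Squaring the state (A + B) under this table yields -B + B + B + B, which the dictionary arithmetic collapses to 2B in a single pass.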
As observed above, it is most natural to assume that -A * B = -(A * B). This means one has a genuine hypercomplex number ring; and furthermore it makes perfect intuitive sense. It means that the antimagician for A not only annihilates A, but in every situation acts in such a way as to produce precisely the antimagician of the magician which A would have produced in that situation. In this way it annihilates the effects of A as well as A itself, making a clean sweep of its annihilation by making the system just as it would have been had A never existed in the first place.
More generally, if the initial state of a magician system is represented by the linear combination z_0 = c_1 M_1 + ... + c_N M_N, where the c_i are any real numbers representing "concentrations" of the various magicians, then the subsequent states may be obtained from the simple iteration
z_n = z_{n-1}^2     (7)
And if one has a magician system with a constant external input, one gets the slightly more complex equation
z_n = z_{n-1}^2 + c     (8)
where c is a magician system which does not change over time.
Julia Sets
As observed in the Introduction, the latter equation suggests a close relationship between magician systems and Julia sets. For, suppose one has a system of five magicians called 0, 1, i, -1 and -i, with a commutative multiplication obeying i * i = -1. This particular magician system, when used as the algebra G for an hypercomplex number space, yields the complex numbers; and the equation for an externally-driven magician system is then just the standard quadratic iteration in the complex plane. The Julia set corresponding to a given "environment" value c contains those initial magician system states which do not lead to a situation in which some magician is copied infinitely many times; and it also contains all initial states which are limit points of viable states of this type. The boundary of the Julia set thus demarcates the viable realm of initial system states from the non-viable realm (note that what I call the Julia set, some authors call the "filled Julia set," reserving the term "Julia set" for the boundary of what I call a Julia set). The Mandelbrot set, on the other hand, is the collection of environments c for which the Julia set is connected, rather than infinitely disconnected (it is known that these are the only two possibilities; see Devaney, 1988).
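For the complex-number case, membership in the (filled) Julia set can be approximated by the standard escape-time test: iterate z -> z^2 + c and check whether the orbit escapes a fixed radius. The iteration limit and cutoff radius below are conventional choices, and the function name is my own.

```python
def in_filled_julia(z0, c, max_iter=200, bound=2.0):
    """Escape-time test: z0 is (approximately) in the filled Julia set
    of z -> z^2 + c if its orbit stays within the cutoff radius."""
    z = z0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False
    return True
```

In the system-theoretic reading, in_filled_julia(z0, c) asks: is the initial system state z0 viable in the environment c?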
How do these concepts generalize to other instances of hypercomplex numbers? So far as I know, the answer to this question is at present unknown. The most basic results, such as the non-emptiness of the Julia set of a polynomial mapping, all depend on the fact that a polynomial in the complex number algebra has only finitely many roots. This is not true in an arbitrary hypercomplex number system, and thus the mathematical theory for the more general case can be expected to be very different. So far mathematicians have not focused their attention on these questions; but this is bound to change.
Incorporating Space or Stochasticity
The above equations represent only the simplest kind of magician system: every magician is allowed to cast its spell on every other magician at every time step. The possibility of a spatial graph of interconnection is ignored. If one introduces space then things become much more awkward. Where z_{n-1} = c_1 M_1 + ... + c_d M_d, the iteration becomes
z_n = SUM over i,j of p_ij c_i c_j (M_i * M_j)     (9)
where the pij are appropriately defined integer constants. The simple case given above is then retrieved from the case where the pij are equal to appropriate binomial coefficients. While not as elegant as the equations for the non-normalized case, these more general iterations are still relatively simple, and are plausible objects for mathematical study.
The same approach works for studying stochastic dynamics, provided that one is willing to ignore genetic drift and use an "iterated mean path" approximation. In this case the interpretation is that each constant c_i is the probability that an arbitrarily selected element of the population is magician M_i. Thus the p_ij are not integers; they are determined by the expected value equation, and they work out so that the sum of the c_i remains always equal to 1.
Single and Multiple Action
We have not yet introduced single action, whereby a magician A, acting all by itself, creates some magician B. But this can be dealt with on a formal level, by introducing a dummy magician called, say, R, with the property that R*R = R, so that R will always perpetuate itself. To say that A creates B one may then say that R*A = B.
Is there a similar recourse for the case of triple interactions? As a matter of fact there is; but it involves two steps. Say one wants to express the fact that A, B and E combine to form C -- something that might, in a more explicit notation, be called
*3(A,B,E) = C (10)
The way to do this is to introduce another dummy magician, again called R, so that A * B = R and E * R = C. This reduction goes back to Charles S. Peirce (1935) and his proof that Thirds, or triadic relations, are sufficient to generate all higher-order relations.
The trouble with this reduction is that it takes two time steps instead of one. Somehow one must guarantee that R survives long enough to combine with E to produce C. The easiest thing is to have R produce itself. To guarantee that R does not persist to influence future reactions, however, one should also have C produce -R. Thus R will survive as long as it is needed, and no longer.
By this sort of mechanism all triple, quadruple and higher interactions can be ultimately reduced to sequences of paired interactions. In practice, it may sometimes be more convenient to explicitly allow operations *i which combine i magicians to form an output. But these operations may be understood as a "shorthand code" for certain sequences of pairwise operations; they do not need to enter into the fundamental theory.
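The two-step reduction can be sketched directly. The rule dictionary below encodes the dummy magician R; all the names are illustrative, and for simplicity the sketch omits the antimagician bookkeeping (C producing -R), tracking only which magicians have appeared.

```python
# Pairwise rules encoding the triple interaction *3(A, B, E) = C.
rule = {
    ('A', 'B'): 'R',   # step 1: A and B combine to create the dummy magician R
    ('R', 'R'): 'R',   # R perpetuates itself while it is needed
    ('E', 'R'): 'C',   # step 2: E acts on R to create C
}

def products(population, rule):
    """One round of pairwise spell-casting: keep the population and
    add everything that any ordered pair creates."""
    created = {rule[(a, b)] for a in population for b in population
               if (a, b) in rule}
    return population | created

pop = {'A', 'B', 'E'}
pop = products(pop, rule)   # after one step, R has appeared
pop = products(pop, rule)   # after two steps, C has appeared
```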
So far we have two algebraic operations: + meaning cancellation and formal summation, and * meaning action. Two is a convenient number of algebraic operations to have: nearly all of abstract algebra has to do with systems containing one or two operations. Unfortunately, however, in order to give a complete algebraic treatment of autopoietic systems, it is necessary to introduce a third operation. The reason is the phenomenon of gestalt. In the context of pattern recognition systems, for example, gestalt expresses itself as emergent pattern. In many cases, a pattern A will emerge only when the two entities B and C are considered together, and not when either one is considered on its own.
Our algebra (M,+,*) gives us no direct way of expressing the statement "A is a pattern which emerges between B and C." There is no way to express the word and as it occurs in this statement, using only the concepts of cancellation/summation and action. Thus it becomes necessary to introduce a third operation #, with the interpretation that A#B is the entity formed by "joining" A and B together into a single larger entity. The operation # can unproblematically be assumed to have an identity -- the zero process, which combines with A to form simply A.
In this view, to say that A is an emergent pattern between B and C, is to say that A is a pattern in B#C. If D is the process which recognizes this pattern, then we have
D * B#C = A (11)
where it is assumed that # takes precedence over * in the order of operations.
To get a handle on the operation, it helps to think about a simple example. Suppose the magicians in question are represented as binary sequences. The most natural interpretation of # is then as a juxtaposition operator, so that, e.g.,
001111 # 0101 = 0011110101
This example makes the meaning of inversion rather obvious. The expression B-1A refers to the result of the following operation: 1) if A has an initial segment which is identical to B, remove this segment; 2) if A does not have an initial segment which is identical to B, leave A as is. Thus inverses under # are unproblematic if properly defined.
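Under the binary-sequence interpretation, # and its inverses are a few lines of code. The function names are mine, but the behavior follows the definition just given.

```python
def jux(a, b):
    """The # operation on binary sequences: juxtaposition."""
    return a + b

def inv_apply(b, a):
    """B^{-1}A: if A has an initial segment identical to B, remove it;
    otherwise leave A as is."""
    return a[len(b):] if a.startswith(b) else a

assert jux('001111', '0101') == '0011110101'
assert inv_apply('001111', '0011110101') == '0101'   # segment removed
assert inv_apply('111', '0101') == '0101'            # no match: A unchanged
```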
It is also clear that # behaves distributively with respect to addition:
C#(A+B) = C#A + C#B (12)
(A+B)#C = A#C + B#C
Thus (M,+,#) would be a ring, if one made the somewhat unnatural assumption that 0 # A = 0 (here 0 is the zero of the additive group; it is the "empty magician system"). This equation, however, says that the result of juxtaposing with the empty magician system is the empty magician system. Unfortunately, it would seem much more logical to set
0#A = A (13)
thus equating the multiplicative and additive identities and sacrificing the ring structure; but on the other hand it is hard to see how the definition 0#A=0 could do any real harm.
The relation between # and * is even more difficult. Juxtaposition is clearly a noncommutative operation, since
0101 # 001111 = 0101001111 ;
so it would seem that in general # must be considered noncommutative. But with a little ingenuity, a kind of weak "one-sided commutativity" may be salvaged, at least in the important special case (discussed above) in which the operation * is an operation of pattern recognition. For, consider: the operation which takes A#B into B#A has a constant, and in fact very small, algorithmic information; and so, in the limit of very long sequences, the structure of A#B and the structure of B#A will be essentially the same. This means that
C * (A#B) = C * (B#A) (14)
for all C; and in fact even for fairly short sequences the two sides of the equation will be very close to equal.
On the other hand, under the straightforward juxtaposition interpretation the reverse is not true, i.e.
(A#B) * C ≠ (B#A) * C
because the action of the juxtaposition A#B may depend quite sensitively on order. If A*B is defined as the output of some Turing machine M given program A and data B, then clearly equality is not even approximately valid, since A#B and B#A need not be similar as programs.
Next, and most crucially, what about distributivity with respect to *? In the juxtaposition interpretation this plainly does not hold. But by modifying the juxtaposition interpretation slightly and inoffensively, one may salvage the rule. Suppose that, when juxtaposing two sequences, one places a marker between them, as in
001111 # 0101 = 001111|0101
This will not affect the patterns in the sequence significantly. Then, suppose that one changes the computational interpretation of the sequences appropriately, so that a "|" marker indicates to the Turing machine that it should run two separate programs, one consisting of the part of the sequence before the |, the other consisting of the part of the sequence after the |, and that when it has finished running the two programs it should juxtapose the results. If one adopts this interpretation then one finds that, pleasantly enough,
(A#B) * C = A*C # B*C (15)
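A toy version of this construction can be checked mechanically. The "program" semantics below (cyclic XOR of the data against the program bits) is an arbitrary stand-in for a Turing machine, chosen only so that the sketch runs; what matters is that the "|" marker splits the program, runs the two halves separately, and juxtaposes the results, which makes right distributivity hold by construction.

```python
def run(program, data):
    """A toy stand-in for 'program acting on data'.  A '|' marker in the
    program means: run the part before the marker and the part after it
    as two separate programs, then juxtapose the two results.  The base
    action (cyclic XOR against the program bits) is an arbitrary invention."""
    if '|' in program:
        left, right = program.split('|', 1)
        return run(left, data) + run(right, data)
    return ''.join(str(int(d) ^ int(program[i % len(program)]))
                   for i, d in enumerate(data))

A, B, C = '0011', '1100', '010101'
lhs = run(A + '|' + B, C)       # (A # B) * C, with the marker convention
rhs = run(A, C) + run(B, C)     # A*C # B*C
```

By construction lhs equals rhs: the marker convention makes right distributivity hold automatically, whatever base action is chosen.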
The reverse kind of distributivity is false; as a rule
A * (B#C) and A*B # A*C are not equal. However, in the case of pattern recognition processes, the equality
A * (B#C) ?= A*B + A*C (16)
will hold a great deal of the time. It will hold in precisely those cases where A detects no emergent pattern between B and C. The difference
A * (B#C) - A*B - A*C (17)
is the emergent pattern which A detects between B and C.
So we end up with a peculiar and possibly unique kind of algebraic structure. (M,+,*) is a ring; (M,+,#) is not a ring, due to the rule 0#A=A; and (M,#,*) is not a ring, due to the lack of left-sided distributivity, but it is what is known as a near-ring (Pilz, 1977). This unsightly conglomeration, we suggest, is the algebra of autopoiesis, the algebra of complexity, the algebra of mind. Until this algebra is understood, complex systems will remain largely uncomprehended.
Incorporating emergence into the magician dynamic yields, in the simplest case, the following iteration:
zn = zn-1 * (zn-1 + zn-1#2) (18)
This case assumes that only emergences between pairs are being recognized. To incorporate emergences between triples yields
zn = zn-1 * (zn-1 + zn-1#2 + zn-1#3) (19)
and the most general case gives an infinite series formally represented by
zn = zn-1 * [ zn-1 # (1# - zn-1)#-1 ] (20)
(where 1# denotes the identity of the semigroup (M,#)).
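Equation (20) simply packages (18), (19) and all their higher-order analogues into one expression, via the formal geometric series for # (the expansion below is purely formal, assuming nothing about convergence):

```latex
z \,\#\, (1_{\#} - z)^{\#(-1)}
  \;=\; z \,\#\, \left( 1_{\#} + z + z^{\#2} + z^{\#3} + \cdots \right)
  \;=\; z + z^{\#2} + z^{\#3} + \cdots
```

so that (20) reads zn = zn-1 * (zn-1 + zn-1#2 + zn-1#3 + ...), extending (18) and (19) to all orders of emergence.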
What are the analogues of Julia sets and Mandelbrot sets for these unorthodox iterations? The mathematics of today gives few clues. But these are the questions which we must answer if we want a genuine science of complex systems.
Finally, it is worth observing that this third operation # is not technically necessary. As with multiple interactions, however, the operation # may be expressed in terms of (M,+,*), if one is willing to avail oneself of unnatural formalistic inventions. All that is required here is a magician called, say, Jux, with the property that
*3(A,B,Jux) = A#B (21)
This reduces the product # to an *3 magician product, and hence, by our earlier reduction, to a series of ordinary magician products. Expressing it in this way conceals the algebraic properties of juxtaposition, but for situations in which these properties are not important, it may be more convenient to have only two operations to deal with.
This uncomfortable reduction reminds one of the limitations of hypercomplex numbers. Ideally, one would like to be able to deal with any algebra that happens to come up in the context of analyzing a complex system (this point is made very forcefully in (Andreka and Nemeti, 1990)). One would like to be able to deal with dynamics on (M,+,*,#) just as easily as on (M,+,*). Whether this will ever be the case, as the saying goes, "remains to be seen." In any event, I suggest that the hypercomplex numbers are an excellent place to begin. Generalizations to other algebras are important, but if hypercomplex numbers are too difficult, then more esoteric algebras would seem to be completely out of reach.
7.6 ALGEBRA, DYNAMICS AND COMPLEXITY
So, many complex systems can be naturally viewed as quadratics on hypercomplex numbers and related algebras. So what?
Formalism, in itself, confers no meaning or understanding. There is nothing inherently to be gained by formulating a complex system as an abstract algebra. But the idea is that, by doing mathematical analysis and computer simulation of iterations on algebras, one should be able to obtain practical insights into complex system behavior. At present this is more a vision than a reality. But my belief is that this research programme has an excellent chance of success.
In this concluding section, I will first seek to put the model of the previous section in perspective, by relating it to previous work on the algebraic modeling of complex systems. Then I will propose a series of crucial questions regarding the behavior of iterations on hypercomplex numbers. Finally, I will briefly discuss the possible evolutionary implications of the association between complex systems and fractals on abstract algebras.
Algebra, Dynamics and Complexity
I am not the first to seek a connection between complex systems and abstract algebra. In an article by H. Andreka and I. Nemeti, entitled "Importance of Universal Algebra for Computer Science," one finds the following statement:
Badly needed is a theory of complex systems as an independent but exact mathematical theory aimed at the production, handling and study of highly complex systems. This theory would disregard the system's origin, its nature etc., the only aspect it would concentrate on would be its being complex. ...
We conclude by taking a look at today's mathematics trying to estimate from which of its branches could a new mathematics "of quality" emerge forming the basis of this (not yet existing) culture of complex systems. (More or less this amounts to looking for the mathematics of general system theory (in the sense of Bertallanfy, Ashby and their followers)...).... It is universal algebra, category theory, algebraic logic and model theory which seem promising for our problems.
Not only do they dismiss the vast formalism of "complexity science" that has emerged within physics, chemistry and applied computer science; but, despite their central interest in understanding complex computer programs, they also dismiss the traditional theory of computer science. The answer to the problems of complexity science, so they claim, lies nowhere else but in abstract algebra and related areas of mathematical logic. Of the four branches of algebra and logic which they identify, universal algebra is the most prominent in their own work; and indeed, if interpreted sufficiently broadly, universal algebra can be understood to encompass a great deal of logic and model theory. For universal algebra is nothing more or less than the study of axiom systems and algebras in the abstract:
Abstract algebra ... contains many... axiom systems, e.g. group theory, lattice theory, etc. Universal algebra no more puts up with the study of finitely many fixed axiom systems. Instead, the axiom systems themselves form the subject of study.... Universal algebra investigates those regularities which hold when we try to describe an arbitrary phenomenon by some axioms we ourselves have chosen.
Now, this sounds very promising, but the trouble with this proposal is that, from the point of view of the practical scientist or the computer programmer, universal algebra is a disappointingly empty branch of mathematics. There are very few useful, nontrivial theorems. The same can be said for category theory, model theory and algebraic logic. There are some neat technical results, but it is all too abstract to have much practical value. In short, even if one expressed a certain complex system in this kind of mathematical language, one would not be able to use this expression to draw meaningful conclusions about the idiosyncratic behavior of that particular system. Of course, it is impossible to foresee the future development of any branch of mathematics or science; it is possible that these fields of study (which have largely fallen out of favor, due precisely to this paucity of interesting results) will someday give rise to deep and applicable ideas. But until this happens, the programme of Andreka and Nemeti remains a dream.
Although the ideas of this chapter were developed before I had heard of the work of Andreka and Nemeti, in retrospect it would seem to complement their research programme quite nicely. There are, however, two major differences between my approach and theirs. First, I derive algebraic structures from general system-theoretic models, rather than from models of particular systems. And second, I lay a large stress on dynamical iterations on these algebras, rather than the algebraic structures themselves.
Regarding the first difference, although I do not claim that hypercomplex numbers are the only algebraic structures of any use to complexity science, I do claim that these rings are of fundamental system-theoretic importance. I have proposed alternative algebraic structures for modeling certain aspects of system behavior, and though I have shown that these alternative algebraic structures can be formally reduced to hypercomplex numbers, I have not demonstrated the practical utility of these reductions. Therefore I believe that, eventually, it will be necessary to develop a flexible theory of polynomial iterations on algebras. However, one must start somewhere, and given the general utility of the hypercomplex numbers for modeling magician systems, I believe that they form a very good starting point.
Next, regarding the second difference, it must not be forgotten that, if universal algebra suffers from a lack of examples, dynamical systems theory almost suffers from a surplus of examples. Drawing on the incomparable power of differential and integral calculus, it is by far the most operational approach to complexity science yet invented. The approach which I have taken here is intended to take advantage of the strengths of both approaches: to combine the structural freedom of abstract algebra with the operationality of dynamical systems theory. Thus, instead of embracing the utter generality advocated by Andreka and Nemeti, I restrict myself initially to the hypercomplex numbers, and propose to transfer some of the ideas and methods of dynamical systems theory to this context.
Despite its analytical power, however, as noted in Chapter One, the contemporary mathematical theory of dynamical systems is somewhat restricted in scope. First of all, nearly all of dynamical systems theory deals with deterministic rather than stochastic systems. And very little of the theory is applicable, in practice, to systems with a large number of variables. Much current research in dynamical systems theory deals with "toy iterations" -- very simple deterministic dynamical systems, often in one or two or three variables, which do not accurately model any real situation of interest, but which are easily amenable to mathematical analysis. Implicit in the research programme of dynamical systems theory is the assumption that the methods used to study these toy iterations will someday be generalizable to more interesting iterations.
The approach of the present chapter, though falling into the general category of "dynamics," pushes in a different direction from the main body of dynamical systems theory. Instead of looking at more and more complicated iterations on low-dimensional real and complex spaces, I propose to look at simple iterations such as zn+1 = zn^2 + c and its relatives, but on relatively high-dimensional spaces with unorthodox multiplications. The place of complexity science, I suggest, is here, halfway between the unbridled generality of universal algebra and the rigid analytic conformity of conventional dynamical systems theory.
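As a minimal illustration of the proposed programme, the sketch below iterates z <- z*z + c over an algebra given by an explicit multiplication table, and tests points for escape. The table shown encodes the ordinary complex numbers, purely as a sanity check; any d-dimensional table T[i][j] (the coefficient vector of ei*ej) can be dropped in. The iteration count and escape bound are arbitrary choices of mine.

```python
import numpy as np

def alg_mul(x, y, T):
    """Multiply two vectors in a hypercomplex algebra with structure
    tensor T: (x * y)_k = sum_{i,j} x_i y_j T[i, j, k]."""
    return np.einsum('i,j,ijk->k', x, y, T)

def escapes(z, c, T, iters=50, bound=1e6):
    """Crude escape-time test for the iteration z <- z*z + c; points
    that never escape approximate the filled Julia set."""
    for _ in range(iters):
        z = alg_mul(z, z, T) + c
        if np.linalg.norm(z) > bound:
            return True
    return False

# The complex numbers as a 2-dimensional hypercomplex algebra: e0 = 1, e1 = i
T = np.zeros((2, 2, 2))
T[0, 0] = [1, 0]
T[0, 1] = [0, 1]
T[1, 0] = [0, 1]
T[1, 1] = [-1, 0]

c = np.array([0.0, 0.0])
inside = not escapes(np.array([0.0, 0.0]), c, T)   # 0 never escapes under z^2
outside = escapes(np.array([2.0, 0.0]), c, T)      # 2 blows up to infinity
```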
In this section I will present a list of conjectures, the exploration of which I believe to be important for the development of the "complex systems and hypercomplex fractals" research programme. All of these conjectures basically get at the same point: the crucial thing to study is the nature of the multiple mappings by which systems give rise to algebras, algebras give rise to Julia sets, and Julia sets describe systems. (We will use the term "Julia set" loosely here, to refer not only to the Julia sets obtained from nice mappings like z2 + c, but also to the Julia-set-like entities obtained from the less tidy mappings also discussed above.)
Conjecture 1. Different types of complex systems give rise to different types of multiplication table.
It would seem that, by verifying this conjecture, a fair amount of insight could be obtained from hypercomplex numbers with hardly any mathematics or simulation. Do immunological systems have a characteristic type of multiplication table? What about psychological systems, ecological systems, economic systems? Do different personality types give rise to different multiplication tables? What about different economies? The reduction to hypercomplex numbers gives an interesting new method for comparing and classifying complex systems, related but not identical to standard differential and difference equation approaches. Palmer (1994) has proposed that more and more complex systems correspond to more and more complex algebras: quaternions, octonions, and so forth. Whether or not these details are correct, it seems plausible that some such correspondence holds.
Conjecture 2. Simpler multiplication tables give rise to simpler Julia sets.
Up to a certain point, this question could be approached in a purely intuitive way, by producing a large number of hypercomplex Julia sets and observing their complexity. Eventually, however, one wishes to have a formal understanding, and here one runs into the problem of defining "simplicity." One could take the approach of algorithmic information theory, defining the simplicity of a multiplication table as the length of the shortest program required to compute it, and the simplicity of a Julia set in terms of the lengths of the shortest programs required to compute its discrete approximations. But this approach makes the statement almost obvious: the shortest program for computing a Julia set will probably be to run the quadratic iteration, and the bulk of that program will generally be taken up by the multiplication table. Instead, one wishes to measure the visual complexity of the Julia set: say, the length of the shortest program for computing the Julia set that is inferable by a learning algorithm which knows nothing about quadratic iteration, for instance one that works by extracting repeated patterns (Bell, Cleary and Witten, 1990).
Conjecture 3. Projections of Julia sets can be used to help "steer" complex system behavior.
Suppose that one had a flexible method for computing Julia sets of high-dimensional hypercomplex algebras in real time (of course, this will never be possible for arbitrary high-dimensional hypercomplex algebras, but as just observed above, it seems reasonable to expect that complex systems will lead to structured rather than arbitrary multiplication tables). Today's workstations may not be up to the task of rapidly simulating Julia sets over 75-dimensional hypercomplex algebras, but those of ten years hence may well be. Thus one can imagine, in the not too distant future, using the following method to study a complex system. First, represent the system as a magician system. Then, locate the system's initial state within the Julia set of the system. Next, test out possible changes to the system by observing whether or not they move the state of the system out of the Julia set. In this scheme, an interactive movie of the projected Julia set would be much more than an attractive toy; it would be an indispensable tool for planning, for system steering. The beautiful complexity of the well-known complex number Julia sets is a warning against the assumption that system viability follows simple, linear rules. System viability follows intricate fractal boundaries; and surely the same is true for subtler aspects of system behavior.
Conjecture 4. Similar systems give rise to similar Julia sets.
This is simply the question of how the structure of a Julia set depends on the structure of the algebra which generated it. I have already suggested that unstructured algebras give rise to unstructured Julia sets. This idea naturally leads to the hypothesis that algebras which share common structures will tend to give rise to Julia sets that share common structures. If this kind of "structural continuity" holds true, it has a profound system-theoretic meaning: it says that patterns in system structure are closely tied with patterns in system viability.
It seems at least plausible that our intuitive ability to manage complex systems rests on an implicit awareness of this correspondence. We tend to assume that structurally similar systems will operate similarly, and in particular that they will be viable under similar circumstances. But, from the perspective of autopoietic algebra, this assumption is seen to depend on the peculiar relation between hypercomplex Julia sets and their underlying multiplication tables.
Conjecture 5. There are close relations between the Julia sets corresponding to different views of the same system.
To explore this conjecture one must first decide what one means by "different views." One natural approach is to use ring theory. Recall the definition of an ideal of a ring A: a subset U of A is an ideal of A if u*a and a*u are in U for every a in A. It is particularly interesting to apply this idea to the special case, discussed several times above, in which the multiplication * connotes both pattern recognition and action: then a collection of patterns U is an ideal if U contains all patterns recognized by processes in U, and all patterns recognizable in processes in U. In other words, in this case, an ideal is a closed pattern system.
One can find ideals by running the following iterative process until it stops: beginning with a set of patterns S0, let Si+1 denote all the patterns recognized by and in the pattern/processes in Si. An ideal U defines a ring of cosets A/U, whose elements are of the form U + a, in which U + a and U + b are considered equivalent if there are u and v in U so that u + a = v + b. Addition and multiplication on the coset ring are defined by the obvious rules (U+a) + (U+b) = U + (a+b), (U+a) * (U+b) = U + a*b. The ring A/U represents the magician system A as viewed from the subjective perspective of the subsystem U. The homomorphism theorems tell us that A/U is homomorphic to A, so that the "subjective world" of U is in a sense a faithful reflection of the "objective world" A. In other words, there is a map f from A to A/U with the property that f(ab) = f(a)f(b) and f(a+b) = f(a) + f(b); this map determines the "subjective" correlate f(a) = U + a of the "objective" entity a.
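The iterative process for finding ideals is easy to sketch. The recognizes function below is a deliberately artificial stand-in for the pattern-recognition product *; the point is only the fixed-point iteration itself. (Strictly speaking, this computes a subset closed under internal recognition; a true ideal of A would also have to absorb products with arbitrary elements of A.)

```python
def closed_pattern_system(seed, recognizes):
    """Run the iteration S_{i+1} = S_i plus everything recognized by and
    in members of S_i, until a fixed point is reached.

    recognizes(a, b) -> set of patterns that process a recognizes in b."""
    S = set(seed)
    while True:
        new = set()
        for a in S:
            for b in S:
                new |= recognizes(a, b)
        if new <= S:          # nothing new appeared: S is closed
            return S
        S |= new

# Toy universe: pattern/processes are residues mod 5, and "a recognizes
# (a*b mod 5) in b" -- an invented stand-in for the * operation.
recognizes = lambda a, b: {(a * b) % 5}
U = closed_pattern_system({1, 2}, recognizes)
```

Here the seed {1, 2} closes up to {1, 2, 3, 4}, the nonzero residues mod 5.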
In this ring-theoretic perspective, our question is: what is the relation between the Julia set for a quadratic over A, and the Julia set for the same quadratic over A/U? What is the relation between the Julia set over A/U and the Julia set over A/V? How does the guaranteed homomorphism carry over into a geometrical correspondence between the different Julia sets? One might speculate that the Julia set for A/U would maintain the same overall structure as that for A, but would omit certain patterns and details. However, this is only a speculation, and the matter is really not clear at all.
The emphasis on Julia sets in these conjectures is somewhat arbitrary. Julia sets have to do with system viability, but viability is only one property of a system. Just as interesting are more specific conjectures about system dynamics, i.e., conjectures regarding the individual trajectories which must be computed in order to obtain Julia sets. In fact, Conjectures 2, 4 and 5 may be reformulated in terms of trajectories, to obtain new and perhaps equally interesting conjectures:
2'. Do simpler multiplication tables give rise to simpler average trajectories?
4'. Do similar systems tend to give rise to similar trajectories?
5'. Are there relations between the trajectories corresponding to different views of the same system?
Furthermore, the system steering tool proposed in Conjecture 3 could also be generalized to deal with system trajectories. Instead of just testing for viability, one could test the properties of the trajectories obtained by changing certain elements of the system.
And, just as the viability/instability dichotomy gives rise to Julia sets, so every other property of trajectories gives rise to its own characteristic set. These other sets are generally subsets of the Julia set; they are fractals within a fractal. For instance, suppose one wants to look for system states in which all magicians keep their numbers within certain specified bounds (this occurs, for example, with biological systems, in which a "homeostatic" state is defined as one in which the levels of all important substances remain within their natural range). Then one needs to compute the set consisting of all system states satisfying this property, rather than merely the property of viability; and system steering should be done by applying the method of Conjecture 3 to an appropriate subset of the Julia set, instead of the entire Julia set. And conjectures 2, 4 and 5 apply to these subsets of the Julia set as well as to the Julia set as a whole, thus giving rise to what might be labeled conjectures 2'', 4'' and 5''.
Having outlined what must be done in order to turn the sketchy model given above into a concrete, operational theory, I will now briefly turn to more speculative matters. The association of complex systems with hypercomplex fractals has a great number of interesting implications, but among all these, perhaps the most striking are the implications for complex system evolution. Hypercomplex fractal geometry has the potential to put some meat on the bones of the "post-Neo-Darwinist" theory of evolution.
In its most extreme form, the Neo-Darwinist view of evolution assumes that an organism consists of a disjoint collection of traits. Evolution then gradually improves each trait, until each trait achieves a value which maximizes the "fitness criterion" posed by the environment. This is a very handy view which works well for many purposes; as argued in EM, however, it falls short on two fronts. First, it ignores ecology; that is, it ignores the substantial feedback between an organism and its environment. The environment of an organism is not independent of that organism, on the contrary, it is substantially shaped by that organism, and therefore the various fitness criteria of the various organisms in an ecosystem form a system of simultaneous nonlinear equations. And second, strict Neo-Darwinism ignores the complex self-organizing processes going on inside the organism. An organism is not a list of traits, it is an autopoietic system, and a slight change in one part of the system can substantially affect many different parts of the system. In short, strict Neo-Darwinism sets up a simple, mechanical organism against an inanimate environment, when what one really has is one complex autopoietic system, the organism, nested within another complex autopoietic system, the environment.
It is perfectly reasonable and direct to model both of these autopoietic systems -- organisms and environments -- as magician systems. On the one hand, organisms are chemical systems, and magician systems are substantially similar to Kampis's (1991) "component-systems," which were conceived as a model of biochemical reactions. And on the other hand, ecosystems are frequently modeled in terms of coupled ordinary differential equations, but any model expressed in this form can easily be reformulated as a magician system model. This is not the place to go into details, but the interested reader will find that the energetic transformations involved in a food web are easily expressed in terms of magician spells, as are such inter- organismic relations as parasitism and symbiosis.
So, suppose one has modeled organisms and environments as magician systems. Then one has two related questions:
1) How can one modify the internal makeup of an organism so as to improve its fitness relative to its environment, or, at the very least, so as to change its behavior without drastically decreasing its fitness?
2) How can one modify the population structure of an environment without threatening ecological stability?
The answer to both of these questions, if the present theory is correct, is: first of all, by staying within the fractal boundary of an appropriate Julia set.
Changing the internal makeup of an organism means changing the population structure of the magician system defining that organism. Different population structures will lead to different organisms. This is vividly observable in cases where a slight change in the level of one chemical leads to a dramatic structural change; e.g. in the axolotl, to which an increased thyroxin level grants the ability to breathe air. But if one changes the organism's makeup in the wrong way, one will obtain a vector outside the relevant Julia set, and homeostasis will be destroyed; the level of some variable will shoot off to infinity. Effective evolution requires containment within the appropriate fractal boundary.
Similarly, a small change in the population of one organism can sometimes lead to ecological catastrophe. But in other situations, this is not true; whole species can be destroyed with few lasting effects on the remainder of the ecosystem. In the present view, the difference between these two cases reduces to a difference in proximity to the boundary of the relevant Julia set. When well into the interior of the Julia set, changes can be made without destroying viability. When near the boundary, however, changes must be made very delicately; many directions will maintain viability and many will not.
Organisms and ecosystems appear to be largely unpredictable. It is possible that much of this unpredictability reduces to the unpredictability of the fractal boundaries of Julia sets over abstract algebras. In Chapter 3 it was seen that, in the simple case of a two-dimensional quadratic, the genetic algorithm has the ability to generate diverse elements of the Julia set of a mapping. It is possible that biological evolution represents a similar process carried out in a space of much higher dimension. In this view, the challenge of evolving complex systems is that of generating diversity while staying within appropriate fractal boundaries.
A great deal of work remains to be done before this idea can be turned into a concrete theory, capable of being compared with empirical data. Intuitively and speculatively, however, the suggestions are clear. Casting aside strict Neo-Darwinism does not mean abandoning all structure and order in evolution. It means exchanging the rigid and artificial structure of trait lists and fixed fitness criteria for the more complex, intricate and natural structure of Julia sets. It means accepting that adaptive systems are really adaptive, autopoietic attractors -- attractors of internal dynamical processes which give rise to intricate fractal structures and which, in some cases, rival the evolutionary dynamic itself in complexity.