Chaotic Logic -- Copyright Plenum Press 1994


Chapter Ten


    The train of thought reported in this chapter began in the fall of 1991. My father was writing Turncoats and True Believers (Ted Goertzel, 1993), a book about political ideologies, those who abandon them, and those who maintain them; he was collecting anecdotes from a variety of biographies and autobiographies, and he was struck by the recurrent patterns. In some intuitively clear but hard-to-specify sense, ideologues of all different stripes seemed to think alike.

    My father has studied ideology for nearly a quarter century, and his approach is thoroughly rationalist: he believes that ideological belief systems coincide with irrational thought, whereas nonideological belief systems coincide with rational thought. This rationalism implies that adherents to nonideological belief systems should all think alike -- they are all following the same "correct" form of logical reasoning. But it says nothing about the nature of irrationality -- it does not explain why deviations from "correct" logical reasoning all seem to follow a few simple psychological forms.

    He hoped to resolve the puzzle by coming up with a "litmus test" for belief systems -- a property, or a list of properties, distinguishing irrational, ideological reasoning from rational thought. For example, two properties under tentative consideration for such a list were:

    1) adherents to ideological belief systems tend to rely on reasoning by analogy rather than logical deduction

    2) adherents to ideological belief systems tend to answer criticism by reference to "hallowed" texts, such as the Bible or Das Kapital.

    But both of these properties were eventually rejected: the first because analogy is an essential part of logical deduction (as shown in Chapter Four); and the second because reference to hallowed texts is really a surface symptom, not a fundamental flaw in reasoning.

    Every property that he came up with was eventually discarded, for similar reasons. Eventually he decided that, given these serious conceptual troubles, Turncoats and True Believers would have to do without a formal theory of justification -- a decision that probably resulted in a much more entertaining book! The present chapter, however, came about as a result of my continued pursuit of an explanation of the difference between "rational" and "ideological" thought.

    I will not discuss political belief systems here -- that would take us too far afield from the cognitive questions that are the center of this book. However, the same questions that arise in the context of political belief systems, also emerge from more general psychological considerations. For I have argued that strict adherence to formal logic does not characterize sensible, rational thought -- first because formal logic can lead to rational absurdities; and second because useful applications of formal logic require the assistance of "wishy-washy" analogical methods. But if formal logic does not define rationality -- then what does?

    In this chapter I approach rationality using ideas drawn from evolutionary biology and immunology. Specifically, I suggest that old-fashioned rationalism is in some respects similar to Neo-Darwinism, the evolutionary theory which holds the "fitness" of an organism to be a property of the organism in itself. Today, more and more biologists are waking up to the sensitive environment-dependence of fitness, to the fact that the properties which make an organism fit may not even be present in the organism, but may be emergent between the organism and its environment. And similarly, I propose, the only way to understand reason is to turn the analogy-dependence of logic into a tool rather than an obstacle, and view rationality as a property of the relationship between a belief system and its "psychic environment."

    In order to work this idea out beyond the philosophical stage, one must turn to the dual network model. Productivity alone does not guarantee the survival of a belief system in the dual network. And unproductivity does not necessarily militate against the survival of a belief system. What then, I asked, does determine survival in the complex environment that is the dual-network psyche?

    There are, I suggest, precisely two properties common to successful belief systems:

    1) being an attractor for the cognitive equation

    2) being productive, in the sense of creatively constructing new patterns in response to environmental demands

    A belief system cannot survive unless it meets both of these criteria. But some belief systems will rely more on (1) for their survival, and some will rely more on (2). Those which rely mainly on (1) tend to be monological and irrational; those which rely mainly on (2) are dialogical, rational and useful. This is a purely structural and systemic vision of rationality: it makes no reference to the specific contents of the belief systems involved, nor to their connection with the external, "real" world, but only to their relationship with the rest of the mind.

    In this chapter I will develop this approach to belief in more detail, using complex biological processes as a guide. First I will explore the systematic creativity inherent in belief systems, by analogy to the phenomenon of evolutionary innovation in ecosystems. Then, turning to the question of how a belief system interacts with the rest of the mind, I will present the following crucial analogy: belief systems are to the mind as the immune system is to the body. In other words, belief systems protect the upper levels of the mind from dealing with trivial ideas. And, just like immune systems, they maintain themselves by a process of circular reinforcement.

    In addition to their intrinsic value, these close analogies between belief systems and biological systems are a powerful argument for the existence of nontrivial complex systems science. Circular reinforcement, self-organizing protection and evolutionary innovation are deep ideas with relevance transcending disciplinary bounds. The ideas of this chapter should provide new ammunition against those who would snidely assert that "there is no general systems theory."


    As suggested in the previous chapter, a complex belief system such as a scientific theory may be modeled as a self-generating structured transformation system. The hard core beliefs are the initials I, and the peripheral beliefs are the elements of D(I,T). The transformations T are the processes by which peripheral beliefs are generated from hard core beliefs. And all the elements of D(I,T) are "components," acting on one another according to the logic of self-generating component-systems.

    For example, in the belief systems of modern physics, many important beliefs may be expressed as equational models. There are certain situation-dependent rules by which basic equational models (Maxwell's Laws, Newton's Laws, the Schrödinger Equation) can be used to generate more complex and specific equational models. These rules are what a physicist needs to know but an engineer (who uses the models) or a mathematician (who develops the math used by the models) need not. The structuredness of this transformation system is what allows physicists to do their work: they can build a complex equational model out of simpler ones, and predict some things about the behavior of the complex one from their knowledge about the behavior of the simpler ones.

    On the other hand, is the conspiratorial belief system presented above not also a structured transformation system? Technically speaking, it fulfills all the requirements. Its hard core consists of one simple conspiracy theory, and its D(I,T) consists of beliefs about psychological and social structures and processes. Its T contains a variety of different methodologies for generating situated conspiracy beliefs -- in fact, as a self-generating component-system, its power of spontaneous invention can be rather impressive. And the system is structured, in the sense required by continuous compositionality: similar phenomena correspond to similar conspiracy theories. Yes, this belief system is an STS, though a relatively uninteresting one.
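A deliberately simplistic STS of this kind is easy to sketch. The following toy representation is an illustrative assumption, not the book's formalism: its hard core is a single conspiracy belief, and its one transformation maps any phenomenon to a situated conspiracy theory.

```python
# A toy "simplistic STS" (representation assumed for illustration).
initials = ["a hidden group arranges events"]            # the hard core I

def T(belief, phenomenon):
    """The system's one transformation rule: situate the hard core in z."""
    return f"{phenomenon} occurred because {belief}"

def D(I, phenomena):
    """Generate the derived belief set D(I,T) for the given phenomena."""
    return [T(b, z) for b in I for z in phenomena]

beliefs = D(initials, ["the market crash", "the election result"])
# Similar phenomena get similar explanations, so continuous compositionality
# holds -- but every output is a near-copy of the hard core: technically an
# STS, yet one of minimal structural complexity.
```

The point of the sketch is that fulfilling the STS requirements is cheap; what distinguishes productive belief systems is the structural complexity their transformations create, not STS-hood itself.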

    In order to rule out cases such as this, one might add to the definition of STS a requirement stating that the set D(I,T) must meet some minimal standard of structural complexity. But there is no pressing need to do this; it is just as well to admit simplistic STS's, and call them simplistic. The important observation is that certain belief systems generate a high structural complexity from applying their transformation rules to one another and their initials -- just as written and spoken language systems generate a high structural complexity from combining their words according to their grammatical rules.

    And the meanings of the combinations formed by these productive belief systems may be determined, to a high degree of approximation, by the principle of continuous compositionality. As expressions become more complex, so do their meanings, in an approximately predictable way. These productive belief systems respond to their environments by continually creating large quantities of new meaning.

    Above it was proposed that, in order to be productive, in order to survive, a belief system needs a generative hard core. A generative hard core is, I suggest, synonymous with a hard core that contains an effective set of "grammatical" transformation rules -- rules that take in the characteristics of a particular situation and put out expressions (involving hard core entities) which are tailored to those particular situations. In other words, the way the component-system which is a belief system works is that beliefs, using grammatical rules, act on other beliefs to produce new beliefs. Grammatical rules are the "middleman"; they are the part of the definition of f(g) whenever f and g are beliefs in the same belief system.

    And what does it mean for an expression E to be "tailored to" a situation s? Merely that E and s fit together, in the sense that they help give rise to significant emergent patterns in the set of pairs {(E,s)}. That a belief system has a generative hard core means that, interpreted as a language, it is complex in the sense introduced in the previous paragraph -- that it habitually creates significant quantities of meaning.

    The situatedness of language is largely responsible for its power. One sentence can mean a dozen different things in a dozen different contexts. Similarly, the situatedness of hard core "units" is responsible for the power of productive belief systems. One hard core expression can mean a dozen different things in a dozen different situations. And depending upon the particular situation, a given word, sentence or hard core expression, will give rise to different new expressions of possibly great complexity. To a degree, therefore, beliefs may be thought of as triggers. When flicked by external situations, these triggers release appropriate emergent patterns. The emergent patterns are not in the belief, nor are they in the situation; they are fundamentally a synergetic production.

10.1.1. Evolutionary Innovation

    To get a better view of the inherent creativity of belief systems, let us briefly turn to one of the central problems of modern theoretical biology: evolutionary innovation. How is it that the simple processes of mutation, reproduction and selection have been able to create such incredibly complex and elegant forms as the human eye?

    In The Evolving Mind two partial solutions to this problem are given. These are of interest here because, as I will show, the problem of evolutionary innovation has a close relation with the productivity of belief systems. This is yet another example of significant parallels among different complex systems.

    The first partial solution given in EM is the observation that sexual reproduction is a surprisingly efficient optimization tool. Sexual reproduction, unlike asexual reproduction, is more than just random stabbing out in the dark. It is systematic stabbing out in the dark.

    And the second partial solution is the phenomenon of structural instability. Structural instability means, for instance, that when one changes the genetic code of an organism slightly, this can cause disproportionately large changes in the appearance and behavior of the organism.

    Parallel to the biological question of evolutionary innovation is the psychological question of evolutionary innovation. How is it that the simple processes of pattern recognition, motor control and associative memory give rise to such incredibly complex and elegant forms as the Fundamental Theorem of Calculus, or the English language?

    One may construct a careful argument that the two resolutions of the biological problem of evolutionary innovation also apply to the psychological case. For example, it is shown that the multilevel (perceptual-motor) control hierarchy naturally gives rise to an abstract form of sexual reproduction. For, suppose process A has subsidiary processes W and X, and process B has subsidiaries X and Y. Suppose A judges W to work better than X, and reprograms X to work like W. Then, schematically speaking, one has

A(t) = A', W, X

B(t) = B', X, Y

A(t+1) = A', W, W

B(t+1) = B', W, Y

(where A' and B' represent those parts of A and B respectively that are not contained in W, X or Y). The new B, B(t+1), contains part of the old A and part of the old B -- it is related to the old A and B as a child is related to its parents. This sort of reasoning can be made formal by reference to the theory of genetic algorithms.
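The reprogramming step above can be sketched in a few lines. The list representation of a process and its subsidiaries is an illustrative assumption; only the substitution logic follows the text.

```python
# Minimal sketch: when the shared subprocess X is reprogrammed to act like W,
# every supervisor whose subsidiary list contains X effectively contains W.

def reprogram_shared(supervisors, worse, better):
    """Substitute `better` for `worse` in each supervisor's subsidiary list."""
    return [[better if p == worse else p for p in sup] for sup in supervisors]

A_t = ["A_rest", "W", "X"]     # A(t) = A', W, X
B_t = ["B_rest", "X", "Y"]     # B(t) = B', X, Y

A_t1, B_t1 = reprogram_shared([A_t, B_t], worse="X", better="W")
# A(t+1) = A', W, W and B(t+1) = B', W, Y: the new B mixes parts of the old
# A and the old B, just as a child mixes parts of its two parents.
```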

    Sexual reproduction is an important corollary of the behavior of multilevel control networks. Here, however, our main concern will be with structural instability. Let us begin with an example from A. Lima de Faria's masterful polemic, Evolution Without Selection (1988). As quoted in EM, Lima de Faria notes that the 'conquest of the land' by the vertebrates is achieved by a tenfold increase in thyroid hormone levels in the blood of a tadpole. This small molecule is responsible for the irreversible changes that oblige the animal to change from an aquatic to a terrestrial mode of life. The transformation involves the reabsorption of the tail, the change to a pulmonary respiration and other drastic modifications of the body interior.... If the thyroid gland is removed from a developing frog embryo, metamorphosis does not occur and the animal continues to grow, preserving the aquatic structures and functions of the tadpole. If the thyroid hormone is injected into such a giant tadpole it gets transformed into a frog with terrestrial characteristics....

    There are species of amphibians which represent a fixation of the transition stage between the aquatic and the terrestrial form. In them, the adult stage, characterized by reproduction, occurs when they still have a flat tail, respire by gills and live in water. One example is... the mud-puppy.... Another is... the Mexican axolotl.

    The demonstration that these species represent transitional physiological stages was obtained by administering the thyroid hormone to axolotls. Following this chemical signal their metamorphosis proceeded and they acquired terrestrial characteristics (round tail and aerial respiration). (p. 241)

This is a sort of paradigm case for the creation of new form by structural instability. The structures inherent in water-breathing animals, if changed only a little, become adequate for the breathing of air. And then, once a water-breathing animal comes to breathe air, it is of course prone to obtain a huge variety of other new characteristics. A small change in a small part of a complex network of processes, can lead to a large ultimate modification of the product of the processes.

    In general, consider any process that takes a certain "input" and transforms it into a certain "output." The process is structurally unstable if changing the process a little bit, or changing its input a little bit, can change the structure of (the set of patterns in) the output by a large amount. This property may also be captured formally: in the following section, the first innovation ratio is defined as the amount which changing the nature of the process changes the structure of the output, and the second innovation ratio is defined as the amount which changing the nature of the input changes the structure of the output.

    When dealing with structures generated by structurally unstable processes, it is easy to generate completely new forms -- one need merely "twiddle" the machinery a bit. Predicting what these new forms will be is, of course, another matter.

The Innovation Ratios (*)

    Let y and y' be any two processes, let z and z' be any two entities, and let e.g. y*z denote the outcome of executing the process y on the entity z. For instance, in EM y and y' denote genetic codes, z and z' are sets of environmental stimuli, and y*z and y'*z' represent the organisms resultant from the genetic codes y and y' in the environments z and z'. Then the essential questions regarding the creation of new form are:

    1) what is the probability distribution of the "first innovation ratio"

d(S(y*z),S(y'*z)) / d#(y,y')

That is: in general, when a process is changed by a certain amount, how much is the structure of the entities produced by the process changed? (d and d# denote appropriate metrics.)

    2) what is the probability distribution of the "second innovation ratio"

d(S(y*z),S(y*z')) / d#(z,z')

That is: when an entity is changed by a certain amount, how much is the structure of the entity which the process y transforms that entity into changed? For example, how much does the environment affect the structure of an organism?

    If these ratios were never large, then it would be essentially impossible for natural selection to give rise to new form.

    In EM it is conjectured that, where z and z' represent environments, y and y' genetic codes, and y*z and y'*z' organisms, these ratios are often large enough that natural selection can give rise to new form. This is not purely a mathematical conjecture. Suppose that for an arbitrary genetic code the innovation ratios had a small but non-negligible chance of being large. Then there may well be specific "clusters" of codes -- specific regions in process space -- for which the innovation ratio is acceptably likely to be large. If such clusters do exist, then, instead of a purely mathematical question, one has the biological question of whether real organisms reside in these clusters, and how they get there and stay there.

    The structural instability of a process y may be defined as the average, over all y', of d(S(y*z),S(y'*z))/d#(y,y') + d(S(y*z),S(y*z'))/d#(z,z') [i.e. of the sum of the first and second innovation ratios]. In a system which evolves at least partly by natural selection, the tendency to the creation of new form may be rephrased as the tendency to foster structurally unstable processes.
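These definitions can be exercised numerically on a toy process. In the sketch below, the ratio definitions follow the text, but the process (an iterated logistic map), the structure operator S (here just the raw trajectory), and the metrics d and d# are all illustrative assumptions.

```python
# Toy computation of the first and second innovation ratios, and their sum
# (the structural instability), for an iterated logistic map.

def orbit(r, x0, n=30):
    """The 'process': iterate x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def d(s1, s2):
    """Metric on output structures: mean absolute pointwise difference."""
    return sum(abs(a - b) for a, b in zip(s1, s2)) / len(s1)

r, x0, eps = 3.9, 0.2, 1e-6     # chaotic parameter regime

# First innovation ratio: perturb the process (the parameter r).
first = d(orbit(r, x0), orbit(r + eps, x0)) / eps
# Second innovation ratio: perturb the input (the initial condition x0).
second = d(orbit(r, x0), orbit(r, x0 + eps)) / eps

instability = first + second
# In the chaotic regime both ratios are very large: tiny "twiddles" of the
# process or its input yield disproportionate structural change in the output.
```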

    Several mathematical examples of structurally unstable processes are discussed in EM. It has been convincingly demonstrated that one-dimensional cellular automata can display a high degree of structural instability. And it is well-known that nonlinear iterated function systems can be structurally unstable; this is the principle underlying the oft-displayed Mandelbrot set.
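The cellular-automaton case can be demonstrated directly. The sketch below is my own illustration of the kind of result cited, not a reproduction of EM's examples: flipping one bit of an elementary CA's rule table (a minimal change to the process) typically alters a large fraction of the pattern it produces.

```python
# Structural instability in a one-dimensional elementary cellular automaton.

def step(cells, rule):
    """One update of an elementary CA with periodic boundary conditions."""
    n = len(cells)
    return tuple(
        (rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def run(rule, cells, steps=64):
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

width = 65
seed = tuple(1 if i == width // 2 else 0 for i in range(width))

a = run(110, seed)        # rule 110, known for complex behavior
b = run(110 ^ 8, seed)    # the same rule with a single table entry flipped

mismatch = sum(x != y for x, y in zip(a, b))
# A one-bit change to the process leaves a substantially different pattern.
```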

10.1.2. Structural Instability of Belief Systems

    Now, let us see how structural instability ties in with the concepts of monologicity and dialogicality. One may consider the hard core of a belief system as a collection of processes y1, y2,.... Given a relevant phenomenon z, one of the yi creates an explanation that may be denoted yi*z. If similar phenomena can have dissimilar explanations, i.e. if yi*z can vary a lot as z varies a little, then this means that the second innovation ratio is large; and it also fulfills half of the definition of dialogicality -- it says that the explanation varies with the phenomenon being explained.

    The other half of the definition of dialogicality is the principle of relevance -- it says that Em(yi*z,z) should be nontrivial; that the explanation should have something to do with the phenomenon being explained. Part of the difficulty with maintaining a productive belief system is the tension between creativity-promoting structural instability and the principle of relevance.

    And what does the first innovation ratio have to do with belief systems? To see this, one must delve a little deeper into the structure of belief systems. It is acceptable but coarse to refer to a belief system as a collection of processes, individually generating explanations. In reality a complex belief system always has a complex network structure.

    Many explanation-generating procedures come with a collection of subsidiary procedures, all related to each other. These subsidiaries "come with" the procedure in the sense that, when the procedure is given a phenomenon to deal with, it either selects or creates (or some combination of the two) a subsidiary procedure to deal with it. And in many cases the subsidiary procedures come with their own subsidiary procedures -- this hierarchy may go several levels down, thus providing a multilevel control network.

    So, in a slightly less coarse approximation to this dual network structure, let us say that each hard core process yi generates a collection of subprocesses yi1, yi2,.... For each i, let us consider the explanations of a fixed phenomenon z generated by one of these subprocesses -- the collection {yij*z, j=1,2,3,...}. The first innovation ratio [d(S(yij*z),S(y'*z))/d#(yij,y')] measures how much changing the subprocess yij changes the explanation which the subprocess generates. This is a measure of the ability of yi to come up with fundamentally new explanations by exploiting structural instability. It is thus a measure of the creativity or flexibility of the hard core of the belief system.

    Of course, if a belief system has many levels, the first innovation ratio has the same meaning on each level: it measures the flexibility of the processes on that level of the belief system. But considering creativity on many different levels has an interesting consequence. It leads one to ask of a given process, not only whether it is creative in generating subprocesses, but whether it generates subprocesses that are themselves creative. I suggest that successful belief systems have this property. Their component processes tend to be creative in generating creative subprocesses.

    This, I suggest, is one of the fundamental roles of belief systems in the dual network. Belief systems are structured transformation systems that serve to systematically create new pattern via multilevel structural instability.

    Earlier I explained how the linguistic nature of belief systems helps make it possible for them to generate complex explanations for novel situations. Linguistic structure allows one to determine the form of a combination of basic building blocks, based on the meaning which one wants that combination to have. Now I have also explained why linguistic structure is not enough: in order to be truly successful in the unpredictable world, a belief system must be systematically creative in its use of its linguistic structure.


    A belief system is a complex self-organizing system of processes. In this section I will introduce a crucial analogy between belief systems and a complex self-organizing physical system: the immune system. If this analogy has any meat to it whatsoever, it is a strong new piece of evidence in favor of the existence of a nontrivial complex systems science.

    Recall that the multilevel control network is roughly "pyramidal," in the sense that each process is connected to more processes below it in the hierarchy than above it in the hierarchy. So, in order to achieve reasonably rapid mental action, not every input that comes into the lower levels can be passed along to the higher levels. Only the most important things should be passed further up.

    For example, when a complex action -- say, reading -- is being learned, it engages fairly high-level processes: consciousness, systematic deductive reasoning, analogical memory search, and so on. But eventually, once one has had a certain amount of practice, reading becomes "automatic" -- lower-level processes are programmed to do the job. Artful conjecture and sophisticated deduction are no longer required in order to decode the meaning of a sentence.

    An active belief about an entity s may be defined as a process in the multilevel control hierarchy that:

    1) includes a belief about s, and

    2) when it gets s as input, deals with s without either

    a) doing recursive virtually-serial computation regarding s, or

    b) passing s up to a higher level.

    In other words, an active belief about s is a process containing a belief about s that tells the mind what to do about s in a reasonably expeditious way: it doesn't pass the buck to one of its "bosses" on a higher level, nor does it resort to slow, ineffective serial computation.

    This definition presupposes that individual "processes" in the dual network don't take a terribly long time to run -- a noncontroversial assumption if, as in Edelman's framework, mental processes are associated with clusters of cooperating neurons. Iterating single processes or sequences of processes may be arbitrarily time-consuming, but that's a different matter.
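The definition can be rendered schematically in code. The class structure below is an assumed illustration, not a model from the text: an active belief handles the entity s itself, expeditiously, rather than escalating it to a higher level.

```python
# Schematic sketch of an "active belief" (representation assumed).

class ActiveBelief:
    def __init__(self, about, response):
        self.about = about            # the belief's content concerning s
        self.response = response      # the quick action it prescribes

    def handle(self, s):
        """Return an action for s, or None to signal 'pass up a level'."""
        return self.response if s == self.about else None

reading = ActiveBelief("printed sentence", "decode automatically")
reading.handle("printed sentence")    # handled at this level, no escalation
reading.handle("unfamiliar script")   # None: the buck passes to a "boss"
```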

    All this motivates the following suggestive analogy: belief systems are to the mind as immune systems are to the body. This metaphor, I suggest, holds up fairly well not only on the level of purpose, but on the level of internal dynamics as well.

    The central purpose of the immune system is to protect the body against foreign invaders (antigens), by first identifying them and then destroying them. The purpose of a belief system, on the other hand, is to protect the upper levels and virtual serial capacity of the mind against problems, questions, inputs -- to keep as many situations as possible out of reach of the upper levels and away from virtual serial processing, by dealing with them according to lower-level active beliefs.

10.2.1. Immunodynamics

    Let us briefly review the principles of immunodynamics. The easy part of the immune system's task is the destruction of the antigen: this is done by big, dangerous cells well suited for their purpose. The trickier duties fall to smaller antibody cells: determining what should be destroyed, and grabbing onto the offending entities until the big guns can come in and destroy them. One way the immune system deals with this problem is to keep a large reserve of different antibody classes in store. Each antibody class matches (identifies) only a narrow class of antigens, but by maintaining a huge number of different classes the system can recognize a wide variety of antigens.

    But this strategy is not always sufficient. When new antigens enter the bloodstream, the immune system not only tries out its repertoire of antibody types, it creates new types and tests them against the antigen as well. The more antigen an antibody kills, the more the antibody reproduces -- and reproduction leads to mutation, so that newly created antibody types are likely to cluster around those old antibody types that have been the most successful.

    Burnet's (1976) theory of clonal selection likens the immune system to a population of asexually reproducing organisms evolving by natural selection. The fittest antibodies reproduce more, where "fitness" is defined in terms of match with antigen. But Jerne (1973) and others showed that this process of natural selection is actually part of a web of intricate self-organization. Each antibody is another antibody's antigen (or at least a potential antibody's antigen), so that antibodies are not only attacking foreign bodies, they are attacking one another.

    This process is kept in check by the "threshold logic" of immune response: even if antibody type Ab1 matches antibody type Ab2, it will not attack Ab2 unless the population of Ab2 passes a certain critical level. When the population does pass this level, though, Ab1 conducts an all-out battle on Ab2. So, suppose an antigen which Ab2 recognizes comes onto the scene. Then Ab2 will multiply, due to its success at killing antigen. Its numbers will cross the critical level, and Ab1 will be activated. Ab1 will multiply, due to its success at killing Ab2 -- and then anything which matches Ab1 will be activated.

    The process may go in a circle -- for instance, if Ab0 matches Ab1, whereas Ab2 matches Ab0. Then one might potentially have a "positive feedback" situation, where the three classes mutually stimulate one another. In this situation a number of different things can happen: any one of the classes can be wiped out, or the three can settle down to a sub-threshold state.
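The threshold logic can be sketched as a crude simulation. The update rule and all constants below are illustrative assumptions; only the qualitative behavior follows the text: no attack until the target population crosses a threshold, then an all-out attack, with the attacker proliferating in turn.

```python
# A minimal sketch of idiotypic "threshold logic" (all numbers assumed).
THRESHOLD = 100

def update(pop, matches, antigen_kills):
    """One crude time step of the antibody network."""
    new = dict(pop)
    for ab, kills in antigen_kills.items():
        new[ab] += kills                       # killing antigen -> proliferation
    for attacker, target in matches:
        if pop[target] > THRESHOLD:            # threshold logic
            new[target] -= pop[target] // 2    # all-out attack on the target
            new[attacker] += pop[target] // 4  # attacker proliferates in turn
    return new

pop = {"Ab1": 50, "Ab2": 50}
matches = [("Ab1", "Ab2")]                     # Ab1 recognizes Ab2

pop = update(pop, matches, {"Ab2": 80})        # antigen arrives; Ab2 multiplies
# Ab2 was still below threshold at the start of that step, so no attack yet.
pop = update(pop, matches, {})                 # now Ab2 > 100: Ab1 attacks
```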

    This threshold logic suggests that, in the absence of external stimuli, the immune system might rest in total equilibrium, nothing attacking anything else. However, the computer simulations of Alan Perelson and his colleagues at Los Alamos (Perelson 1989, 1990; de Boer and Perelson, 1990) suggest that in fact this equilibrium is only partial -- that in normal conditions there is a large "frozen component" of temporarily inactive antibody classes, surrounded by a fluctuating sea of interattacking antibody classes.

    Finally, it is worth briefly remarking on the relation between network dynamics and immune memory. The immune system has a very long memory -- that is why, ten years after getting a measles vaccine, one still won't get measles. This impressive memory is carried out partly by long-lived "memory B-cells" and partly by internal images. The latter process is what interests us here. Suppose one introduces Ag = 1,2,3,4,5 into the bloodstream, thus provoking proliferation of Ab1 = -1,-2,-3,-4,-5. Then, after Ag is wiped out, a lot of Ab1 will still remain. The inherent learning power of the immune system may then result in the creation and proliferation of Ab2 = 1,2,3,4,5. For instance, suppose that in the past there was a fairly large population of Ab3 = 1,1,1,4,5. Then many of these Ab3 may mutate into Ab2. Ab2 is an internal image of the antigen. It lacks the destructive power of the antigen, but it has a similar enough shape to take the antigen's place in the idiotypic network.

    Putting internal images together with immune networks leads easily to the conclusion that immune systems are structurally associative memories. For, suppose the antibody class Ab1 is somehow stimulated to proliferate. Then if Ab2 is approximately complementary to Ab1, Ab2 will also be stimulated. And then, if Ab3 is approximately complementary to Ab2, Ab3 will be stimulated -- but Ab3, being complementary to Ab2, will then be similar to Ab1. To see the value of this, suppose

Ag = 5,0,0,0,5

Ab1 = -5,0,0,0,-5

Ab2 = 5,0,0,-6,0

Ab3 = 0,-4,0,6,0

Then the sequence of events described above is quite plausible -- even though Ab3 itself will not be directly stimulated by Ag. The similarity between Ab3 and Ab1 refers to a different subsequence than the similarity between Ab1 and Ag. But proliferation of Ag nonetheless leads to proliferation of Ab3. This is the essence of analogical reasoning, of structurally associative memory. The immune system is following a chain of association not unlike the chains of free association that occur upon the analyst's couch. Here I have given a chain of length 3, but in theory these chains may be arbitrarily long. The computer simulations of Perelson and de Boer, and those of John Stewart and Francisco Varela (personal communication), suggest that the immune system contains chains that are quite long indeed.
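This chain can be checked mechanically. The vectors come from the text; the matching criterion (approximate complementarity at some position where both shapes are nonzero) is a deliberately crude illustrative assumption.

```python
# The associative chain Ag -> Ab1 -> Ab2 -> Ab3, verified link by link.
Ag  = ( 5,  0,  0,  0,  5)
Ab1 = (-5,  0,  0,  0, -5)
Ab2 = ( 5,  0,  0, -6,  0)
Ab3 = ( 0, -4,  0,  6,  0)

def matches(a, b, tol=1):
    """Approximate complementarity on at least one mutually nonzero position."""
    return any(x != 0 and y != 0 and abs(x + y) <= tol for x, y in zip(a, b))

chain = [Ag, Ab1, Ab2, Ab3]
links = [matches(chain[i], chain[i + 1]) for i in range(3)]
direct = matches(Ag, Ab3)
# links == [True, True, True]: stimulation propagates down the whole chain,
# even though direct == False: Ab3 never matches the antigen itself.
```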

    One worthwhile question is: what good does this structurally associative capacity do for the immune system? A possible answer is given by the speculations of John Stewart and his colleagues at the Institut Pasteur (Stewart, 1992), to the effect that the immune system may serve as a general communication line between different body systems. I have mentioned the discovery of chains which, structurally, are analogous to chains of free association. Stewart's conjecture is that these chains serve as communication links: one end of the chain connects to, say, a neurotransmitter, and the other end to a certain messenger from the endocrine system.

10.2.2. Belief Dynamics

    So, what does all this have to do with belief systems? The answer to this question comes in several parts.

    First of all, several researchers have argued that mental processes, just like antibodies, reproduce differentially based on fitness. As discussed above, Gerald Edelman's version of this idea is particularly attractive: he hypothesizes that types of neuronal clusters survive differentially based on fitness.

    Suppose one defines the fitness of a process P as the size of Em(P,N1,...,Nk) - Em(N1,...,Nk), where the Ni are the "neighbors" of P in the dual network. And recall that the structurally associative memory is dynamic -- it is continually moving processes around, trying to find the "optimal" place for each one. From these two points it follows that the probability of a process not being moved by the structurally associative memory is roughly proportional to its fitness. For when something is in its proper place in the structurally associative memory, its emergence with its neighbors is generally high.

    This shows that, for mental processes, survival is in a sense proportional to fitness. In The Evolving Mind it is further hypothesized that fitness in the multilevel control network corresponds with survival: that a "supervisory" process has some power to reprogram its "subsidiary" processes, and that a subsidiary process may even have some small power to encourage change in its supervisor. Furthermore, it is suggested that successful mental processes can be replicated. The brain appears to have the ability to move complex procedures from one location to another (Blakeslee, 1991), so that even if one crudely associates ideas with regions of the brain this is a biologically plausible hypothesis.

    So, in some form, mental processes do obey "survival of the fittest." This is one similarity between immune systems and belief systems.

    Another parallel is the existence of an intricately structured network. Just as each antibody is some other antibody's antigen, each active belief is some other active belief's problem. Each active belief is continually putting questions to other mental processes -- looking a) to those on the level above it for guidance, b) to those on its own level as part of structurally associative memory search, and c) to those on lower levels for assistance with details. Any one of these questions has the potential of requiring high-level intervention. Each active belief is continually responding to "questions" posed by other active beliefs, thus creating a network of cybernetic activity.

    Recall that, in our metaphor, the analogy to the "level" of antigen or antibody population is, roughly, "level" in the multilevel control network (or use of virtual serial computation). So the analogue of threshold logic is that each active belief responds to a question only once that question has reached its level, or a level not too far below.

    As in the Ab1, Ab2, Ab3 cycle discussed above, beliefs can stimulate one another circularly. One can have, say, two active beliefs B1 and B2, which mutually support one another. An example of this was given a little earlier, in the context of Jane's paranoid belief system: "conspiracy caused leg pain" and "conspiracy caused stomach pain."

    When two beliefs support one another, both are continually active -- each one is being used to support something. Thus, according to the "survival of the fittest" idea, each one will be replicated or at least reinforced, and perhaps passed up to a higher level. This phenomenon, which might be called internal conspiracy, is a consequence of what in Chapter Eight was called structural conspiracy. Every attractor of the cognitive equation displays internal conspiracy. But the converse is not true; internal conspiracy does not imply structural conspiracy.

    Prominence in the dual network increases with intensity as a pattern (determined by the structurally associative memory), and with importance for achieving current goals (determined by the multilevel control network). Internal conspiracy is when prominence is achieved through illusion -- through the conspiratorially-generated mirage of intensity and importance.
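The arithmetic behind internal conspiracy can be illustrated with a toy linear model, in which each belief's prominence decays on its own but is replenished by support from the other. The update rule and the particular coefficients are invented for the illustration:

```python
# Toy model of "internal conspiracy": each belief's prominence is
# sustained chiefly by support from the other. The linear update and
# the decay/support coefficients are illustrative assumptions.

decay, support = 0.5, 0.6   # a lone belief loses half its prominence per step

# An unsupported belief fades away ...
lone = 0.2
for _ in range(50):
    lone = decay * lone

# ... but two mutually supporting beliefs amplify one another.
b1 = b2 = 0.2
for _ in range(50):
    b1, b2 = decay * b1 + support * b2, decay * b2 + support * b1

print(lone, b1)   # lone is negligible; b1 has grown by orders of magnitude
```

With these coefficients the mutual-support loop has net gain 1.1 per step, so the conspiring pair amplifies itself even though either belief alone would fade away.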

10.2.3. Chaos in Belief Systems and Immune Systems

    Rob de Boer and Alan Perelson (1992) have shown mathematically that, even in an immune system consisting of two antibody types, chaos is possible. And experiments at the Institut Pasteur in Paris (Stewart, 1992) indicate the presence of chaotic fluctuations in the levels of certain antibody types in mice. These chaotic fluctuations are proof of an active immune network -- proof that the theoretical possibility of an interconnected immune system is physically realized.

    Suppose that some fixed fraction of antibody types participates in the richly interconnected network. Then these chaotic fluctuations ensure that, at any given time, a "pseudorandom" sample of this fraction of antibody types is active. Chaotic dynamics accentuates the Darwinian process of mutation, reproduction and selection, in the sense that it causes certain antibody types to "pseudorandomly" reproduce far more than would be necessary to deal with external antigenic stimulation. Then these excessively proliferating antibody types may mutate, and possibly connect with other antibody types, forming new chains.
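The flavor of such fluctuations can be conveyed by a generic pair of weakly coupled chaotic maps. To be clear, this is not the de Boer-Perelson B-cell model -- the logistic form and the coupling constant are stand-in assumptions chosen only to exhibit sensitive dependence on initial conditions:

```python
# A stand-in illustration of sensitive dependence, in the spirit of the
# two-antibody-type chaos result. This pair of weakly coupled logistic
# maps is NOT the de Boer-Perelson model; it is a generic chaotic
# system showing how two mutually stimulating populations can
# fluctuate "pseudorandomly".

def f(x, r=4.0):
    """Logistic growth: chaotic for r = 4."""
    return r * x * (1 - x)

def step(x, y, c=0.05):
    """Each population grows on its own, weakly stimulated by the other."""
    return (1 - c) * f(x) + c * f(y), (1 - c) * f(y) + c * f(x)

def run(x, y, n=100):
    for _ in range(n):
        x, y = step(x, y)
    return x, y

# Two trajectories starting a hair apart end up macroscopically separated:
a = run(0.4, 0.3)
b = run(0.4 + 1e-9, 0.3)
print(abs(a[0] - b[0]))
```

After a hundred steps the two trajectories, initially differing by one part in a billion, are far apart -- the "pseudorandomness" exploited in the argument above.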

    Of course, chaos in the narrow mathematical sense is not necessary for producing "pseudorandom" fluctuations -- complex periodic behavior would do as well, or aperiodic behavior which depends polynomially but not exponentially on initial conditions. But since we know mathematically that immune chaos is possible, and we have observed experimentally what looks like chaos, calling these fluctuations "chaos" is not exactly a leap of faith. Indeed, the very possibility of a role for immunological chaos is pregnant with psychological suggestions. What about chaos in the human memory network?

    Chaos in the immune network may, for example, be caused by two antibody types that partially match each other. The two continually battle it out, neither one truly prevailing; the concentration of each one rising and falling in an apparently random way. Does this process not occur in the psyche as well? Competing ideas, struggling against each other, neither one ever gaining ascendancy?

    To make the most of this idea, one must recall the basics of the dual network model. Specifically, consider the interactions between a set (say, a pair) of processes which reside on one of the lower levels of the perceptual-motor hierarchy. These processes themselves will not generally receive much attention from processes on higher levels -- this is implicit in the logic of multilevel control. But, by interacting with one another in a chaotic way, the prominences of these processes may on some occasions pseudorandomly become very large. Thus one has a mechanism by which pseudorandom samples of lower-level processes may put themselves forth for the attention of higher-level processes. And this mechanism is enforced, not by some overarching global program, but by natural self-organizing dynamics.

    This idea obviously needs to be refined. But even in this rough form, it has important implications for the psychology of attention. If one views consciousness as a process residing on the intermediate levels of the perceptual-motor hierarchy, then in chaos one has a potential mechanism for pseudorandom changes in the focus of attention. This ties in closely with the speculation of Terry Marks (1992) that psychological chaos is the root of much impulsive behavior.


    I have been talking about beliefs "attacking" one another. By this I have meant something rather indirect: one belief attacks another by giving the impression of being more efficient than it, and thus depriving it of the opportunity to be selected by higher-level processes. One way to think about this process is in terms of the "antimagician" systems of Chapter Seven.

    Also, I have said that belief systems may be viewed as component-systems, in which beliefs act on other beliefs to produce new beliefs. But I have not yet remarked that the process of beliefs destroying other beliefs may be conceived in the same way. When beliefs B and C are competing for the attention of the same higher-level process, then each time one "unit" of B is produced it may be said that one "unit" of anti-C is produced. In formal terms, this might be guaranteed by requiring that whenever f(g) = B, f(g,B) = C^. According to this rule, unless f and g vanish immediately after producing B, they will always produce one unit of anti-C for each unit of B.
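The bookkeeping implied by this rule can be sketched as a toy multiset dynamic. The Counter representation, the synchronous update, and the annihilation step are my own illustrative assumptions, not a formal component-system model:

```python
from collections import Counter

# Toy bookkeeping for the rule above: each time the pair (f, g)
# produces a unit of belief B, it also emits a unit of anti-C ("C^"),
# and each unit of C^ annihilates one unit of C.

def step(pop):
    pop = Counter(pop)
    if pop['f'] > 0 and pop['g'] > 0:   # f(g) = B  and  f(g,B) = C^
        pop['B'] += 1
        pop['C^'] += 1
    k = min(pop['C'], pop['C^'])        # annihilation: C^ cancels C
    pop['C'] -= k
    pop['C^'] -= k
    return pop

pop = Counter({'f': 1, 'g': 1, 'C': 3})
for _ in range(3):
    pop = step(pop)
print(dict(+pop))   # after three steps, B has fully displaced C
```

Each round of production adds one unit of B and cancels one unit of C, so B gradually displaces its competitor without any process ever acting on C directly.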

    The relationship between C and C^ strengthens the immunological metaphor, for as I have shown each antibody class has an exact complement. In the immune system, an antibody class and its complement may coexist, so long as neither one is stimulated to proliferate above the threshold level. If one of the two complements exceeds the threshold level, however, then the other one automatically does also. And the result of this is unpredictable -- perhaps periodic variation, perhaps disaster for one of the classes, or perhaps total chaos.

    Similarly, B and C may happily coexist in different parts of the hierarchical network of mind. The parts of the mind which know about B may not know about C, and vice versa. But then, if C comes to the attention of a higher-level process, news about C is spread around. The processes supervising B may consider giving C a chance instead. The result may be all-out war. The analogue here is not precise, since there is no clear "threshold" in psychodynamics. However, there are different levels of abstraction -- perhaps in some cases the jump from one of these levels to the next may serve as an isomorph of the immunological threshold.

    Anyhow, the immunological metaphor aside, it is clear that the concept of an "antimagician" has some psychological merit. Inherently, the dynamics of belief systems are productive and not destructive. It is the multilevel dynamics of the dual network which provides for destruction. Space and time constraints dictate that some beliefs will push others out. And this fact may be conveniently modeled by supposing that beliefs which compete for the attention of a supervisory process are involved with creating "anti-magicians" for one another.

    Indeed, recalling the idea of "mixed-up computation" mentioned in Chapter Seven, this concept is seen to lead to an interesting view of the productive power of belief systems. Belief systems without antimagicians cannot compute universally unless their component beliefs are specifically configured to do so. But belief systems with antimagicians can compute universally even if the beliefs involved are very simple and have nothing to do with computation. It appears that, in this case, the discipline imposed by efficiency has a positive effect. It grants belief systems the automatic power of negation, and hence it opens up to them an easy path toward the production of arbitrary forms.

    For instance, consider the following simple collection of beliefs:

A: I believe it is not a duck

B: I believe it is a duck

C: I believe it walks like a duck

D: I believe it quacks like a duck

E: I believe it is a goose

The mind may well contain the following "belief generation equations":

F(F) = F

F(C,D) = B

B(B) = B

G(G) = G

G(E) = B^

The self-perpetuating process F encodes the rule "If it walks like a duck, and quacks like a duck, it should probably be classified as a duck." The self-perpetuating process B encodes the information that "it" is a duck, and that if it was classified as a duck yesterday, then barring further information it should still be a duck today. And, finally, the self-perpetuating process G says that, if in fact it should be found out that "it" is a goose, one should not classify it as a duck, irrespective of the fact that it walks like a duck and quacks like a duck (maybe it was a goose raised among ducks!).

    The entity F performs conjunction; the entity G performs negation. Despite the whimsical wording of our example, the general message should be clear. The same type of arrangement can model any system in which certain standard observations lead one to some "default" classification, but more specialized observations have the potential to overrule the default classification. The universal computation ability of antimagician systems may be rephrased in the following form: belief systems containing conjunctive default categorization, and having the potential to override default categorizations, are capable of computing anything whatsoever. Belief systems themselves may in their natural course of operation perform much of the computation required for mental process.
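The duck/goose generation equations above can be simulated directly. The set-valued state, the one-pass update, and the annihilation convention for B^ are illustrative simplifications:

```python
# Minimal sketch of the duck/goose "belief generation equations".
# Beliefs are strings; each rule fires when all its inputs are present.

RULES = [
    (('F',),          'F'),    # F(F) = F   : the rule perpetuates itself
    (('F', 'C', 'D'), 'B'),    # F(C,D) = B : walks + quacks => duck
    (('B',),          'B'),    # B(B) = B   : yesterday's duck, today's duck
    (('G',),          'G'),    # G(G) = G   : the override rule persists
    (('G', 'E'),      'B^'),   # G(E) = B^  : "it is a goose" negates "duck"
]

def step(beliefs):
    new = set()
    for inputs, output in RULES:
        if all(i in beliefs for i in inputs):
            new.add(output)
    # an antimagician B^ annihilates B (and is consumed in the process)
    return {b for b in new if not b.endswith('^') and b + '^' not in new} \
         | {b for b in new if b.endswith('^') and b[:-1] not in new}

state = {'F', 'G', 'C', 'D'}   # it walks and quacks like a duck
state = step(state)            # => classified as a duck
assert 'B' in state

state |= {'E'}                 # new evidence: it is a goose
state = step(state)            # the override kills the duck belief
assert 'B' not in state
```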


    Now, in this final section, I will turn once again to the analysis of concrete belief systems. In Chapter Eight I considered one example of intense internal conspiracy -- Jane's paranoid belief system. But this may have been slightly misleading, since Jane's belief system was in fact an explicit conspiracy theory. In this section I will consider a case of internal and structural conspiracy which has nothing to do with conspiracies in the external world: the belief system of Christianity.

    Christianity is a highly complex belief system, and I will not attempt to dissect it in detail. Instead I will focus on some very simple belief dynamics, centering around the following commonplace example of circular thought:

God exists because the Bible says so, and what the Bible says is true because it is the Revealed Word of God.

This "proof" of the existence of God is unlikely to convince the nonbeliever. But I was astonished, upon reading through a back issue of Informal Logic, to find an article attempting its defense.

    The author of the article, Gary Colwell (1989), reorganizes the argument as follows:

        (1) The Bible is the Revealed Word of God

        (2) The Bible says that God exists

        (3) God exists

His most interesting thesis is that, in certain cases, (1) is more plausible than (3). If one accepts this, it follows that demonstrating (3) from (1) is not at all absurd. Therefore, Colwell reasons, in practice the argument is not circular at all.

    I do not agree with Colwell's argument; in fact I find it mildly ridiculous. But by pursuing his train of thought to its logical conclusion, one may arrive at some interesting insights into the creativity, utility and self-perpetuating nature of the Christian belief system.

10.4.1. The Bible and Belief

    Let us review Colwell's case for the greater plausibility of (1), and pursue it a little further. I contend that, rather than removing the circularity of the argument, what Colwell has actually done is to identify part of the mechanism by which the circularity of the argument works in practice.

    Colwell's argument for the greater plausibility of (1) is as follows:

    It is not uncommon to hear of believers who relate their experience of having encountered God through the reading of the Bible. Prior to their divine encounter they often do not hold the proposition "God exists" as being true with anything approaching a probability of one half. Indeed, for some the prior probability of its being true would be equivalent to, or marginally greater than, zero. Then ... they begin to read the Bible. There in the reading, they say, they experience God speaking to them. It is not as though they read the words and then infer that God exists, though such an inference may be drawn subsequently. Rather, they claim that the significance of the words, the personal relevance of the words, and the divine source of the words are all experienced concomitantly. In reading the words they have the complex experience of being spoken to by God. The experienced presence of God is not divorced from their reading of the words....

    Given that this experience of encountering God in the reading of the Bible is a grounding experience for the believer, from which he may only later intellectually abstract that one element that he refers to by saying that God exists, proposition (1) for such a believer may actually be more plausible than proposition (3).

    Putting aside the question of how common this type of religious experience is, what is one to make of this argument?

    I think that Colwell is absolutely right. It probably is possible for a person to find (1) more plausible than (3). For a person who has had the appropriate religious experience, the argument may be quite sensible and noncircular.

    After all, when told that a young man has long hair, and asked to rate which of the following two sentences is more likely, what will most people say?

A: The young man is a bank teller

B: The young man is a bank teller and smokes marijuana

The majority of people will choose B. Numerous psychological experiments in different contexts show as much (for a review, see Holland et al (1975)). But of course, whenever B is true, A is also true, so there is no way B is more likely than A. The point is, intuitive judgements of probability or plausibility do not always obey the basic rules of Boolean logic. Even though (1) implies (3) (and in fact significantly implies (3) in the sense of Chapter Four), a person may believe that (1) is more likely than (3). Why not -- it is known that, even though B implies A, a person may believe B to be more likely than A.

    What this means, I believe, is that the human mind is two-faced about its use of the word "and." If asked, people will generally make a common-language statement equivalent to "'and' means Boolean conjunction." But when it comes down to making real-life judgements, the human mind often interprets "and" in a non-Boolean way: it thinks as if "A and B" could be true even though A were false. Thus, "God exists and the Bible is the Revealed Word of God" is treated as if it could be true even though "God exists" were false. In judging the plausibility or likelihood of "A and B," the mind sometimes uses a roughly additive procedure, combining the likelihood of A with the likelihood of B, when on careful conscious reflection a multiplicative procedure would make more sense.
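The two readings of "and" can be contrasted in a few lines. The additive rule below is a deliberate caricature of the intuitive judgement, not an established psychological model:

```python
# Contrasting the two ways of judging "A and B". The additive blend is
# an invented caricature of intuitive plausibility judgement.

def and_boolean(p_a, p_b):
    """Normative conjunction (assuming independence for simplicity)."""
    return p_a * p_b

def and_intuitive(p_a, p_b):
    """A crude additive blend: averaging the two plausibilities."""
    return (p_a + p_b) / 2

p_teller, p_smokes = 0.1, 0.6
# Boolean logic: a conjunction can never exceed either conjunct.
assert and_boolean(p_teller, p_smokes) <= p_teller
# The additive caricature rates the conjunction ABOVE "bank teller"
# alone -- reproducing the conjunction fallacy.
assert and_intuitive(p_teller, p_smokes) > p_teller
```

Under the multiplicative rule the conjunction can never be more likely than either conjunct; under the additive caricature, a highly plausible second conjunct drags the whole conjunction upward -- the long-haired bank teller in miniature.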

    But it seems to me that Colwell's argument contains the seeds of its own destruction. I grant him that in certain cases the inference from (1) to (3) may be reasonable -- i.e., given the a priori judgement of greater plausibility for (1). But nonetheless, the argument is still fundamentally circular. And I suspect that its circularity plays a role in the maintenance of religious belief systems.

    I have known more than one religious individual who, when experiencing temporary and partial doubt of the existence of God, consulted the Bible for reassurance -- in search of the kind of experience described by Colwell, or some less vivid relative of this experience. But on the other hand, the same people, when they came across passages in the Bible that made little or no intuitive sense to them, reasoned that these passages must be true because the Bible is the Revealed Word of God. Certain passages in the Bible are used to bolster belief in God's existence. But belief in the validity of the Bible -- when shaken by other passages from the Bible -- is bolstered by belief in God's existence. The two beliefs (1) and (3) support each other circularly. Considered in appropriate context, they may be seen to produce one another.

    This psychological pattern may lead to several different results. In some cases the intuitive unacceptability of certain aspects of the Bible may serve to weaken belief in God. That is, one might well reason:

(1) The Bible is the Revealed Word of God

(2') The Bible is, in parts, unreasonable or incorrect

(3') Thus God is capable of being unreasonable or incorrect     

And (3'), of course, violates the traditional Christian conception of God. This is one possible path to the loss of religious faith.

    On the other hand, one might also reason

(1'') God exists and is infallible

(2'') The Bible is, in parts, unreasonable or incorrect

(3'') The Bible is not the Revealed Word of God

This is also not an uncommon line of argument: many religious individuals accept that the Bible is an imperfect historical record, combining the Word of God with other features of human origin. For instance, not all Christians accept the Bible's estimate of the earth's age at 6000 years; and most Christians now accept the heliocentric theory of the solar system.

    Finally, more interestingly, there is also the possibility that -- given appropriate real-world circumstances -- these two circularly supported beliefs might lead to increased belief in God. We have agreed that it is possible to believe (1) more strongly than (3). So, for sake of argument, suppose that after a particularly powerful experience with the Bible, one assigns likelihood .5 to (1), and likelihood .1 to (3). Then, what will one think after one's experience is done, when one has time to mull it over? Following Colwell's logic, at this point one will likely reason that, if (1) has likelihood .5, then the likelihood of (3) cannot be as low as .1. Perhaps one will up one's estimate of the likelihood of (3) to .5 (the lowest value which it can assume and still be consistent with Boolean logic). But then, now that one believes fairly strongly in the existence of God, one will be much more likely to attend church, to speak with other religious people -- in short, to do things that will encourage one to have yet more intense experiences with the Bible. So then, given this encouragement, one may have a stronger experience with the Bible that causes one to raise one's belief in (1) to .8. And after pondering this experience, one may raise one's belief in (3) to .8 -- and so forth. The circularity of support may, in conjunction with certain properties of the real world in which the believer lives, cause an actual increase in belief in both (1) and (3).
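This ratcheting process is easy to caricature numerically. The "consistency" and "experience" updates below are invented stand-ins for the reasoning steps just described:

```python
# Toy iteration of the bootstrapping loop: a consistency step pulls
# belief in (3) up to belief in (1), and an experience step nudges (1)
# upward in proportion to (3). Both update rules are invented
# caricatures, not a fitted model of belief revision.

def consistency(p1, p3):
    """Since (1) implies (3), the likelihood of (3) must be at least
    that of (1)."""
    return p1, max(p3, p1)

def experience(p1, p3, boost=0.3):
    """Stronger belief in God => more church, more religious company =>
    a stronger next experience with the Bible, nudging (1) upward."""
    return min(1.0, p1 + boost * p3), p3

p1, p3 = 0.5, 0.1   # after the first powerful experience with the Bible
for _ in range(10):
    p1, p3 = consistency(p1, p3)
    p1, p3 = experience(p1, p3)
print(p1, p3)   # both beliefs have ratcheted up to 1.0
```

Starting from the likelihoods .5 and .1 used above, the loop drives both beliefs upward until they saturate.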

    So, whereas Colwell expresses "curiosity about the prominence that the putatively circular Biblical argument has received," I see no reason for curiosity in this regard. The Biblical argument in question really is circular, and it really does play a role in the maintenance of religious belief systems. The religious experience which he describes is indeed real, at least in a psychological sense -- but it does not detract from the circularity of the argument. Rather, it is connected with this circularity in a complex and interesting way.

10.4.2. Christianity as a Belief System

    Let us rephrase this discussion in terms of pattern. "God exists" is a certain way of explaining events in the world. It explains some events -- say, a child being hit by a car -- very poorly. But it explains other events fairly well. To give an extreme example, several college students have reported to me that they do better on their mathematics tests if they pray beforehand. This phenomenon is explained rather nicely by the belief that God exists and intervenes to help them. My own preferred explanation -- the placebo effect -- is much less simple and direct.

    Two related examples are the religious ecstasy some people experience in church, and the experience of "talking to God" -- either directly or, as discussed above, through the Bible. These subjective psychological phenomena are well explained by the hypothesis that God exists. Alternate explanations exist, but they are more complex; and the religious belief system is rather vigilant in sending out "antimagicians" against these alternatives.

    Believing that "the Bible is the Revealed Word of God" explains a few other things, in addition to those phenomena explained by "God exists." And, more importantly, it gives the believer a set of rules by which to organize her life: the Ten Commandments, and much much more. These rules promote happiness, in the sense defined above: they provide order where otherwise there might be only uncertainty and chaos. They actually create pattern and structure. They are a very effective "psychological immune system" -- protecting valuable high-level processes from dealing with all sorts of difficult questions about the nature of life, morality and reality.

    So, one has an excellent example of internal conspiracy: belief in the Bible supports belief in God, and vice versa. And in very many cases this internal conspiracy is also a structural conspiracy: the two beliefs create one another. Belief in the Bible gives rise to belief in God, in an obvious way; and belief in the Christian God, coupled with a certain faith in the trappings of contemporary religion, gives rise to belief in the Bible. It is certainly possible to believe in the Christian God while doubting the veracity of the Bible; but in nearly all cases belief in the Christian God leads at least to belief in large portions of the Bible.

    This is a useful belief system, in that it really does deal with a lot of issues at low levels, saving higher levels the trouble. It is psychologically very handy. For example, it militates against the mind becoming troubled with metaphysical questions such as the "meaning of life." And it does wonders to prevent preoccupation with the fear of death. It serves its immunological function well.

    Next, as anyone who has perused religious literature must be well aware, the Christian belief system is systematically creative in explaining away phenomena that would appear to contradict Biblical dogma. It is precisely because of this that arguing evolution or ethics with an intelligent Christian fundamentalist can be unsettling. Every argument receives a response which, although clever and appropriate in its own context, is nonetheless strange and unexpected.

    So, to a certain extent, the Christian belief system meets both the criteria for survival laid out at the beginning of the chapter. It is an attractor for the cognitive equation, a structural conspiracy, and it is creatively productive in the service of the dual network.

    However, the Christian belief system clearly does have its shortcomings. It entails a certain amount of awkward dissociation. For instance, the Bible implies that the Earth is only a few thousand years old, thus contradicting the well-established theory of evolution by natural selection. In order to maintain the Christian belief system, the mind must erect a "wall" between its religious belief in the Bible and its everyday belief in scientific ideas. This is precisely the sort of dissociation that leads to ineffective thinking: dissociation that serves to protect a belief from interaction with that which would necessarily destroy it.

    The prominence of this sort of dissociation, however, depends on the particular mind involved. Some people manage to balance a Christian belief system with a scientific world-view in an amazingly deft way. This is systematic creativity at work! For others, however, Christianity becomes stale and unproductive, separate from the flow of daily life and thought. The value of a belief system cannot be understood outside of the context of a specific believing mind. Just as a cactus is fit in the desert but unfit in the jungle, Christianity may be rational or irrational, depending on the psychic environment which surrounds it.