
Part III. Mathematical Structures in the Mind

CHAPTER EIGHT

THE STRUCTURE OF CONSCIOUSNESS

8.1 INTRODUCTION

Over the past few years, the writing of books on consciousness has become a minor intellectual industry. From philosophers to computer scientists, from psychologists to biologists to physicists, everyone seems to feel the need to publish their opinion on consciousness! Most of these works are reductionistic in focus: one complex mechanism after another is proposed as the essential "trick" that allows us to feel and experience ourselves and the world around us.

On the other hand, Nicholas Humphrey, in his A History of the Mind, has taken a somewhat different perspective. He argues that consciousness is something tremendously simple: that it is nothing more or less than raw feeling. Simply feeling something there. According to this view, consciousness exists beneath the level of analysis: it is almost too simple to understand. This view ties in nicely with Eastern mysticism, e.g. with the Hindu notion that at the core of the mind is the higher Self or atman, which is totally without form.

I like to think there is value in both of these approaches. Raw awareness is present in every state of consciousness, but different states of consciousness have different structures; and, possibly, there are some universals that tie together all different states of consciousness.

The mechanistically-oriented consciousness theorists are getting at the structure of states of consciousness. Some of these theorists, such as Daniel Dennett in his celebrated book Consciousness Explained, have made the error of assuming this structure to be the essence, of ignoring the subjective facts of experience. Other theorists, however, are looking at biological and computational mechanisms of consciousness structure in a clearer way, without confusing different levels of explanation. This is the category into which I would like to place myself, at least for the purposes of the present chapter. I am concerned here mainly with structures and mechanisms, but I do not confuse these structures and mechanisms with the essence of subjective experience.

Three Questions of Consciousness

In my view, there are three questions of consciousness. The first is, what is raw feeling, raw awareness? What are its qualities? The second is, what are the structural and dynamical properties of different states of consciousness? And the third is: how does raw feeling interface with the structural and dynamical properties of states of consciousness? Each of these is an extremely interesting psychological question in its own right.

I will focus here on the second question, the question of structure. This is where scientific and mathematical ideas have a great deal to offer. However, I do not consider it intellectually honest to entirely shy away from the other questions. Thus, before proceeding to describe various structures of consciousness, I will roughly indicate how I feel the three questions fit together. One may, of course, appreciate my analysis of the various structures of consciousness without agreeing with my views on the nature of raw awareness, i.e. on the first and third questions.

I prefer to answer the first question by not answering it. Raw consciousness is completely unanalyzable and inexpressible. As such, I believe, it is equivalent to the mathematically random. Careful scrutiny of the concept of randomness reveals its relative nature: "random" just means "has no discernible structure with respect to some particular observer." Raw awareness, it is argued, is random in this sense. It is beyond our mental categories; it has no analyzable structure.

As to the third question, the relation between raw consciousness and the structure of consciousness, I prefer to give an animist answer. As Nick Herbert affirms in Elemental Mind (1993), awareness, like energy, is there in everything. From this position, one does not have to give an explanation of how certain states of matter give rise to trans-material awareness while others do not.

However, animism in itself does not explain how some entities may have more awareness than others. The resolution to this is, in my view, provided by the idea of "consciousness as randomness." Entities will be more aware, it seems, if they open themselves up more, by nature, to the incomprehensible, the ineffable, the random. This observation can, as it turns out, be used to explain why certain states of consciousness seem more acutely aware than others. But I will not attempt to push the theory of consciousness this far here; this topic will be saved for elsewhere.

The Structure of Consciousness

So how are states of consciousness structured? The most vivid states of consciousness, I will argue here, are associated with those mental systems that make other mental systems more coherent, more robustly autopoietic. These mental systems involve extreme openness to raw awareness. This view fits in naturally with all that is known about the neuropsychology of consciousness. Consciousness is a perceptual-cognitive loop, a feedback dynamic, that serves to coherentize, to make whole, systematic, definite.

Pushing this train of thought further, I will argue that, in particular, the most vivid states of consciousness are manifested as time-reversible magician systems. It is the property of reversibility, I believe, that allows these thought-systems their openness to the random force of raw awareness.

In David Bohm's language, to be introduced below, reversibility means that consciousness manifests "proprioception of thought." In terms of hypercomplex algebras, on the other hand, reversibility means that states of consciousness correspond to division algebras. This turns out to be a very restrictive statement, as the only reasonably symmetric finite-dimensional division algebras are the reals, the complexes, the quaternions and the octonions. The octonions, which contain the other three algebras as subalgebras, will be taken as the basic algebraic structure of consciousness. This abstract algebraic view of consciousness will be seen to correspond nicely with the phenomenology of consciousness. Octonionic algebras result from adjoining a reflective "inner eye" to the perceptual-cognitive loop that coherentizes objects.
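To make the division-algebra claim a little more tangible, here is a minimal Python sketch -- purely illustrative, and not part of the book's formal development -- of quaternion arithmetic. It shows the two properties appealed to above: multiplication is not commutative, yet every nonzero element has a multiplicative inverse, which is the algebraic face of "reversibility." The function names are mine, not terms from the text.

    def qmul(a, b):
        """Hamilton product of quaternions a = (w, x, y, z) and b = (w, x, y, z)."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def qinv(a):
        """Every nonzero quaternion is invertible: inverse = conjugate / squared norm."""
        w, x, y, z = a
        n = w*w + x*x + y*y + z*z
        return (w/n, -x/n, -y/n, -z/n)

    p, q = (1.0, 2.0, -1.0, 0.5), (0.0, 1.0, 3.0, -2.0)
    print(qmul(p, q))         # p*q
    print(qmul(q, p))         # q*p differs: multiplication is not commutative
    print(qmul(p, qinv(p)))   # approximately (1, 0, 0, 0): the identity

The octonions behave the same way with respect to inverses, though there one loses associativity as well as commutativity.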

As this summary should make clear, this chapter (even more than the rest of the book) is something of a potpourri of innovative and unusual ideas. No claim is made to solve the basic problem of consciousness -- which, insofar as it is a "problem," is certainly insoluble. Rather, the psynet model is used to make various interrelated forays into the realm of consciousness, centered on the question of the structure of conscious experience.

8.2 THE NEUROPSYCHOLOGY OF CONSCIOUSNESS

Before presenting any original ideas, it may be useful to review some relevant experimental work on the biology of consciousness. For decades neuropsychologists have shunned the word "consciousness," preferring the less controversial, more technical term "attention." But despite the methodological conservatism which this terminology reflects, there has been a great deal of excellent work on the neural foundations of conscious experience. In particular, two recent discoveries in the neuropsychology of attention stand out above all others. First is the discovery that, in Dennett's celebrated phrase, there is no "Cartesian Theater": conscious processes are distributed throughout the brain, not located in any single nexus. And next is the discovery of the basic role of this distributed consciousness: nothing esoteric or sophisticated, but simply grouping, forming wholes.

According to Rizzolatti and Gallese (1988), there are two basic ways of approaching the problem of attentiveness. The first approach rests on two substantial claims:

1) that in the brain there is a selective attention center or circuit independent of sensory and motor circuits; and

2) that this circuit controls the brain as a whole.... (p. 240)

In its most basic, stripped-down form this first claim implies that there are some brain regions exclusively devoted to attention. But there are also more refined interpretations: "It may be argued ... that in various cerebral areas attentional neurons can be present, intermixed with others having sensory or motor functions. These attentional neurons may have connections among them and form in this way an attentional circuit" (p.241).

This view of attention alludes to what Dennett (1991) calls the "Cartesian Theater." It holds that there is some particular place at which all the information from the senses and the memory comes together into one coherent picture, and from which all commands to the motor centers ultimately emanate. Even if there is not a unique spatial location, there is at least a single unified system which acts as if it were all in one place.

Rizzolatti and Gallese contrast this with their own "premotor" theory of attention, of which they say:

First, it claims that ... attention is a vertical modular function present in several independent circuits and not a supramodal function controlling the whole brain. Second, it maintains that attention is a consequence of activation of premotor neurons, which in turn facilitates the sensory cells functionally related to them.

The second of these claims is somewhat controversial -- many would claim that the sensory rather than premotor neurons are fundamental in arousing attention. However, as Rizzolatti and Gallese point out, the evidence in favor of the first point is extremely convincing. For instance, Area 8 and inferior Area 6 have no reciprocal connections -- and even in their connections with the parietal lobe they are quite independent. But if one lesions either of these areas, severe attentional disorders can result, including total "neglect" of (failure to be aware of) some portion of the visual field.

There are some neurological phenomena which at first appear to contradict this "several independent circuits" theory of consciousness. But these apparent contradictions result from a failure to appreciate the self-organizing nature of brain function. For instance, as Rizzolatti et al (1981) have shown, although the neurons in inferior Area 6 are not responsive to emotional stimuli, nevertheless a lesion in this area can cause an animal to lose its ability to be aware of emotional stimuli. But this does not imply the existence of some brain-wide consciousness center. It can be better explained by positing an interdependence between Area 6 and some other areas responsive to the same environmental stimuli and also responsive to emotional stimuli. When one neural assembly changes, all assemblies that interact with it are prodded to change as well. Consciousness is part of the self-structuring process of the brain; it does not stand outside this process.

So consciousness is distributed rather than unified. But what does neuropsychology tell us about the role of consciousness? It tells us, to put it in a formula, that consciousness serves to group disparate features into coherent wholes. This conclusion has been reached by many different researchers working under many different theoretical presuppositions. There is no longer any reasonable doubt that, as Umilta (1988) has put it, "the formation of a given percept is dependent on a specific distribution of focal attention."

For instance, Treisman and Schmidt (1982) have argued for a two-stage theory of visual perception. First is the stage of elementary feature recognition, in which simple visual properties like color and shape are recognized by individual neural assemblies. Next is the stage of feature integration, in which consciousness focuses on a certain location and unifies the different features present at that location. If consciousness is not focused on a certain location, the features sensed there may combine on their own, leading to the perception of illusory objects.

This view ties in perfectly with what is known about the psychological consequences of various brain lesions. For instance, the phenomenon of hemineglect occurs primarily as a consequence of lesions to the right parietal lobe or left frontal lobe; it consists of a disinclination or inability to be aware of one or the other side of the body. Sometimes, however, these same lesions do not cause hemineglect, but rather delusional perceptions. Bisiach and Berti (1987) have explained this with the hypothesis that sometimes, when there is damage to those attentional processes connecting features with whole percepts in one side of the visual field, the function of these processes is taken over by other, non-attentional processes. These specific consciousness-circuits are replaced by unconscious circuits, but the unconscious circuits can't properly do the job of percept-construction; they just produce delusions. And this sort of phenomenon is not restricted to visual perception. Bisiach et al (1985) report a patient unable to perceive meaningful words spoken to him from the left side of his body -- though perfectly able to perceive nonsense spoken to him from the same place.

Psychological experiments have verified the same phenomenon. For instance, Kawabata (1986) has shown that one makes a choice between the two possible orientations of the Necker cube based on the specific point on which one first focuses one's attention. Whatever vertex is the focus of attention is perceived as in the front, and the interpretation of the whole image is constructed to match this assumption. Similar results have been found for a variety of different ambiguous figures -- e.g. Tsal and Kolbet (1985) used pictures that could be interpreted as either a duck or a rabbit, and pictures that could be seen as either a bird or a plane. In each case the point of conscious attention directed the perception of the whole. And, as is well known in such cases, once consciousness has finished forming the picture into a coherent perceived whole, this process is very difficult to undo.

Treisman and Schmidt's division of perception into two levels is perhaps a little more rigid than the available evidence suggests. For instance, experiments of Prinzmetal et al (1986) verify the necessity of consciousness for perceptual integration, but also point out some minor role for consciousness in enhancing the quality of perceived features. But there are many ways of explaining this kind of result. It may be that consciousness acts on more than one level: first in unifying sub-features into features, then in unifying features into whole objects. Or it may be that perception of the whole causes perception of the features to be improved, by a sort of feedback process.

8.3 THE PERCEPTUAL-COGNITIVE LOOP

In this section I will abstract the neuropsychological ideas discussed above into a more generalized, mathematical theory, which I call the theory of the Perceptual-Cognitive Loop. This is not a complete theory of the structure of consciousness -- it will be built on in later sections. But it is a start.

Edelman (1990) has proposed that consciousness consists of a feedback loop from the perceptual regions of the brain to the "higher" cognitive regions. In other words, consciousness is a process which cycles information from perception to cognition, to perception, to cognition, and so forth (in the process continually creating new information to be cycled around).

Taking this view, one might suppose that the brain lesions discussed above hinder consciousness, not by destroying an entire autonomously conscious neural assembly, but by destroying the perceptual end of a larger consciousness-producing loop, a perceptual-cognitive loop or PCL. But the question is: why have a loop at all? Are the perceptual processes themselves incapable of grouping features into wholes; do they need cognitive assistance? Do they need the activity of premotor neurons, of an "active" side?

The cognitive end of the loop, I suggest, serves largely as a tester and controller. The perceptual end does some primitive grouping procedures, and then passes its results along to the cognitive end, asking for approval: "Did I group too little, or enough?" The cognitive end seeks to integrate the results of the perceptual end with its knowledge and memory, and on this basis gives an answer. In short, it acts on the percepts, by trying to do things with them, by trying to use them to interface with memory and motor systems. It gives the answer "too little coherentization" if the proposed grouping is simply torn apart by contact with memory -- if different parts of the supposedly coherent percept connect with totally different remembered percepts, whereas the whole connects significantly with nothing. And when the perceptual end receives the answer "too little," it goes ahead and tries to group things together even more, to make things even more coherent. Then it presents its work to the cognitive end again. Eventually the cognitive end of the loop answers: "Enough!" Then one has an entity which is sufficiently coherent to withstand the onslaughts of memory.

Next, there is another likely aspect to the perceptual-cognitive interaction: perhaps the cognitive end also assists in the coherentizing process. Perhaps it proposes ideas for interpretations of the whole, which the perceptual end then approves or disapproves based on its access to more primitive features. This function is not in any way contradictory to the idea of the cognitive end as a tester and controller; indeed the two directions of control fit in quite nicely together.

Note that a maximally coherent percept is not desirable, because thought, perception and memory require that ideas possess some degree of flexibility. The individual features of a percept should be detectable to some degree, otherwise how could the percept be related to other similar ones? The trick is to stop the coherence-making process just in time.

But what exactly is "just in time"? There is not necessarily a unique optimal level of coherence. It seems more likely that each consciousness-producing loop has its own characteristic level of cohesion. Hartmann (1991) has proposed a theory which may be relevant to this issue: he has argued that each person has a certain characteristic "boundary thickness" which they place between the different ideas in their mind. Based on several questionnaire and interview studies, he has shown that this is a statistically significant method for classifying personalities. "Thin-boundaried" people tend to be sensitive, spiritual and artistic; they tend to blend different ideas together and to perceive a very thin layer separating themselves from the world. "Thick-boundaried" people, on the other hand, tend to be practical and not so sensitive; their minds tend to be more compartmentalized, and they tend to see themselves as very separate from the world around them. Hartmann gives a speculative account of the neural basis of this distinction. But the present theory of consciousness suggests an alternate account: that perhaps this distinction is biologically based on a difference in the "minimum cohesion level" accepted by the cognitive end of consciousness-producing loops.
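The loop just described is, at bottom, an iterative test-and-adjust procedure, and it can be caricatured in a few lines of code. The Python sketch below is a toy built on loudly stated assumptions: a percept carries a single numeric "coherence" score, the grouping step simply raises that score, the memory test is random disruption scaled by incoherence, and Hartmann-style boundary thickness appears as a per-individual minimum cohesion level. None of these names or parameters come from the text.

    import random

    def coherentize(percept):
        """Perceptual end: group the features a little more tightly (toy version)."""
        return {"features": percept["features"], "coherence": percept["coherence"] + 0.1}

    def survives_memory(percept, min_cohesion):
        """Cognitive end: does the proposed whole hold together against memory?
        'Contact with memory' is modeled as random disruption scaled by incoherence."""
        disruption = random.random() * (1.0 - percept["coherence"])
        return percept["coherence"] - disruption >= min_cohesion

    def perceptual_cognitive_loop(features, min_cohesion=0.6, max_iters=50):
        percept = {"features": features, "coherence": 0.0}
        for _ in range(max_iters):
            percept = coherentize(percept)                 # "Did I group enough?"
            if survives_memory(percept, min_cohesion):     # cognitive end: "Enough!"
                return percept
        return percept  # never stabilized: no definite object of consciousness

    # A lower minimum cohesion level stops the loop sooner, yielding looser percepts:
    print(perceptual_cognitive_loop(["red", "round"], min_cohesion=0.4))
    print(perceptual_cognitive_loop(["red", "round"], min_cohesion=0.8))

Lowering min_cohesion makes the loop settle sooner on looser percepts, which is one way of reading the "thin-boundaried" personality in this framework.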

The iterative processing of information by the perceptual-cognitive loop is what enables the same object to be disrupted by randomness again and again and again. And this, I claim, is what gives the feeling that one is conscious of some specific object. Without this iteration, consciousness is felt to lack a definite object; the object in question lasts for so short a time that it is just barely noticeable.

And what of the old aphorism, "Consciousness is consciousness of consciousness"? This reflexive property of consciousness may be understood as a consequence of the passage of potential coherentizations from the cognitive end to the perceptual end of the loop. The cognitive end is trying to understand what the perceptual end is doing; it is recognizing patterns in the series of proposed coherentizations and ensuing memory-caused randomizations. These higher-order patterns are then sent through the consciousness-producing loop as well, in the form of new instructions for coherentization. Thus the process that produces consciousness itself becomes transformed into an object of consciousness.

All this ties in quite nicely with the neural network theory of consciousness proposed by the British mathematician R. Taylor (1993). Taylor proposes that the consciousness caused by a given stimulus can be equated with the memory traces elicited by that stimulus. E.g. the consciousness of a sunset is the combination of the faint memory traces of previously viewed sunsets, or previously viewed scenes which looked like sunsets, etc. The PCL provides an analysis on a level one deeper than Taylor's theory; in other words, Taylor's theory is a consequence of the one presented here. For, if the PCL works as I have described, it follows that the cognitive end must always search for memory traces similar to the "stimulus" passed to it by the perceptual end.

What is Coherentization?

There is a missing link in the above account of the PCL: what, exactly, is this mysterious process of "coherentization," of boundary-drawing? Is it something completely separate from the ordinary dynamics of the mind? Or is it, on the other hand, an extension of these dynamics?

The chemistry involved is still a question mark, so the only hope of understanding coherentization at the present time is to bypass neuropsychology and try to analyze it from a general, philosophical perspective. One may set up a model of "whole objects" and "whole concepts," and ask: in the context of this model, what is the structure and dynamics of coherentization? In the Peircean perspective, both objects and ideas may be viewed as "habits" or "patterns," which are related to one another by other patterns, and which have the capacity to act on and transform one another. We then have the question of how a pattern can be coherentized, how its "component patterns" can be drawn more tightly together to form a more cohesive whole.

Here we may turn to the psynet model, and propose that: To coherentize is to make something autopoietic, or more robustly autopoietic. What consciousness does, when it coherentizes, is to make autopoietic systems. It makes things more self-producing.

From a neural point of view, one may say that those percepts which are most likely to survive in the evolving pool of neural maps, are those which receive the most external stimulation, and those which perpetuate themselves the best. External stimulation is difficult to predict, but the tendency toward self-perpetuation can be built in; and this is the most natural meaning for coherentization.

In this view, then, what the perceptual-cognitive loop does is to take a network of processes and iteratively make it more robustly self-producing. What I mean by "robustly self-producing" is: autopoietic, with a wide basin as an attractor of the cognitive equation (of magician dynamics). Any mathematical function can be reproduced by a great number of magician systems; some of these systems will be robustly self-producing and others will not. The trick is to find the most robustly self-producing ones. There are many possible strategies for this kind of search -- but one may be certain that the brain, if it carries out such a search, does not use any fancy mathematical algorithm. It must proceed by a process of guided trial-and-error, and thus it requires constant testing to determine the basin size, and the degree of autopoiesis, of the current iterate. Consciousness, in this model, resides in the testing, which disrupts the overly poor self-production of interim networks (the end result of the iterative process is something which leads to fairly little consciousness, because it is relatively secure against disruption by outside forces).

So, "coherentization" is not a catch-word devoid of content; it is a concrete process, which can be understood as a peculiar chemical process, or else modeled in a system-theoretic way. Thinking about neuronal groups yields a particularly elegant way of modeling coherentization: as the search for a large-basined autopoietic subsystem of magician dynamics. This gives a very concrete way of thinking about the coherentization of a complex pattern. To coherentize a pattern which is itself a system of simpler patterns, emerging cooperatively from each other, one must replace the component patterns with others that, while expressing largely the same regularities, emerge from each other in an even more tightly interlinked way.

8.4 SUBVERTING THE PERCEPTUAL-COGNITIVE LOOP

The perceptual-cognitive loop is important and useful -- but it does not go far enough. It explains how we become attentive to things; or, to put it differently, how we construct "things" by carrying out the process of conscious attention. But as humans we can do much more with our consciousness than just be attentive to things. We can introspect -- consciously monitor our own thought processes. We can meditate -- consciously fixate our consciousness on nothing whatsoever. We can creatively focus -- fix our consciousness on abstract ideas, forming them into wholes just as readily as we construct "physical objects." How do these subtle mental conditions arise from the "reductionistic" simplicity of the perceptual-cognitive loop?

Sketch of a Theory of Meditation

Let us begin with meditation -- in particular, the kind of meditation which involves emptying the mind of forms. This type of meditation might be called "consciousness without an object." In Zen Buddhism it is called zazen.

The very indescribability of the meditative state has become a cliché. The Zen Buddhist literature, in particular, is full of anecdotes regarding the futility of trying to understand the "enlightened" state of mind. Huang Po, a Zen master of the ninth century A.D., framed the matter quite clearly:

Q: How, then, does a man accomplish this comprehension of his own Mind?

A: That which asked the question IS your own Mind; but if you were to remain quiescent and to refrain from the smallest mental activity, its substance would be seen as a void -- you would find it formless, occupying no point in space and falling neither into the category of existence nor into that of non-existence. Because it is imperceptible, Bodhidharma said: 'Mind, which is our real nature, is the unbegotten and indestructible Womb; in response to circumstances, it transforms itself into phenomena. For the sake of convenience, we speak of Mind as intelligence, but when it does not respond to circumstances, it cannot be spoken of in such dualistic terms as existence or nonexistence. Besides, even when engaged in creating objects in response to causality, it is still imperceptible. If you know this and rest tranquilly in nothingness -- then you are indeed following the Way of the Buddhas. Therefore does the sutra say: 'Develop a mind which rests on no thing whatever.'

The present theory of consciousness suggests a novel analysis of this state of mind that "rests on no thing whatever." Consider: the perceptual-cognitive loop, if it works as I have conjectured, must have evolved for the purpose of making percepts cohesive. The consciousness of objects is a corollary, a spin-off of this process. Consciousness, raw consciousness, was there all along, but it was not intensively focused on one thing. Meditative experience relies on subverting the PCL away from its evolutionarily proper purpose. It takes the intensity of consciousness derived from repeated iteration, and removes this intensity from its intended context, thus producing an entirely different effect.

This explains why it is so difficult to achieve consciousness without an object. Our system is wired for consciousness with an object. To regularly attain consciousness without an object requires the formation of new neural pathways. Specifically, I suggest, it requires the development of pathways which feed the perceptual end of the perceptual-cognitive loop random stimuli (i.e., stimuli that have no cognitively perceivable structure). Then the perceptual end will send messages to the cognitive end, as if it were receiving structured stimuli -- even though it is not receiving any structured stimuli. The cognitive end then tries to integrate the random message into the associative memory -- but it fails, and thus the perceptual end makes a new presentation. And so on, and so on. What happens to the novice meditator is that thoughts from the associative memory continually get in the way. The cognitive end makes suggestions regarding how to coherentize the random input that it is receiving, and then these suggestions cycle around the loop, destroying the experience of emptiness. Of course these suggestions are mostly nonsense, since there is no information there to coherentize; but the impulse to make suggestions is quite strong and can be difficult to suppress. The cognitive end must be trained not to make suggestions regarding random input, just as the perceptual end must be trained to accept random input from sources other than the normal sensory channels.

This is not an attempt to explain away mystical experience -- quite the opposite. It is an attempt to acknowledge the ineffable, ungraspable nature of such experience. As argued in detail in The Structure of Intelligence, there is no objective notion of "randomness" for finite structures like minds and brains. Random is defined only relative to a certain observer (represented in computation theory as a certain Universal Turing Machine). So, to say that the meditative state involves tapping into randomness, is to say that the meditative state involves tapping into some source that is beyond one's own cognitive structures. Whether this source is quantum noise, thermal noise or the divine presence is an interesting question, but one that is not relevant to the present theory, and is probably not resolvable by rational means.
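One standard way of making "random relative to an observer" precise -- drawn from algorithmic information theory rather than from anything specific to this book -- is Kolmogorov complexity relative to a universal Turing machine U:

    K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}

A finite string x can then be called random (up to a constant c) relative to U when K_U(x) \ge |x| - c, that is, when U admits no description of x appreciably shorter than x itself. Since changing the reference machine changes K_U only by a bounded additive constant, the randomness of finite objects is always relative to the choice of machine, which is exactly the observer-relativity invoked above.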

Creative Inspiration

Finally, let us peek ahead to the final chapter for a moment, and have a look at the role of the perceptual-cognitive loop in the process of creative inspiration.

As will be observed in detail later, many highly creative thinkers and artists have described the role of consciousness in their work as being very small. The biggest insights, they have claimed, always pop into consciousness whole, with no deliberation or decision process whatsoever -- all the work has been done elsewhere. Yet, of course, these sudden insights always concern some topic that, at some point in the past, the person in question has consciously thought about. A person with no musical experience whatsoever will never all of a sudden have an original, fully detailed, properly constructed symphony pop into her head. Someone who has never thought about physics will not wake up in the middle of the night with a brilliant idea about how to construct a unified field theory. Clearly there is something more than "divine inspiration" going on here. The question is: what are the dynamics of this subtle interaction between consciousness and the unconscious?

In the present theory of consciousness, there is no rigid barrier between consciousness and the unconscious; everything has a certain degree of consciousness. But only in the context of an iterative loop does a single object become fixed in consciousness long enough that raw consciousness becomes comprehensible as consciousness of that object. The term "unconscious" may thus be taken to refer to those parts of the brain that are not directly involved in a consciousness-fixing perceptual/cognitive loop.

This idea has deep meaning for human creative process. In any creative endeavor, be it literature, philosophy, mathematics or science, one must struggle with forms and ideas, until one's mind becomes at home among them; or in other words, until one's consciousness is able to perceive them as unified wholes. Once one's consciousness has perceived an idea as a coherent whole -- then one need no longer consciously mull over that idea. The idea is strong enough to withstand the recombinatory, self-organizing dynamics of the unconscious. And it is up to these dynamics to produce the fragments of new insights -- fragments which consciousness, once again entering the picture, may unify into new wholes.

So: without the perceptual-cognitive loop to aid it, the unconscious would not be significantly creative. It would most likely recombine all its contents into a tremendous, homogeneously chaotic mush ... or a few "islands" of mush, separated by "dissociative" gaps. But the perceptual-cognitive loop makes things coherent; it places restrictions on the natural tendency of the unconscious to combine and synthesize. Thus the unconscious is posed the more difficult problem of relating things with one another in a manner compatible with their structural constraints. The perceptual-cognitive loop produces wholes; the unconscious manipulates these wholes to produce new fragmentary constructions, new collections of patterns. And then the perceptual-cognitive loop takes these new patterns as raw material for constructing new wholes.

And what, then, is the relation between the creative state and the meditative state? Instead of a fixation on the void of pure randomness, the creative condition is a fixation of consciousness on certain abstract forms. The secret of the creative artist or scientist, I propose, is this: abstract forms are perceived with the reality normally reserved for sense data. Abstract forms are coherentized with the same vigor and effectiveness with which everyday visual or aural forms are coherentized in the ordinary human mind. Like the meditative state, the creative state subverts the perceptual-cognitive loop; it uses it in a manner quite different than that for which evolution intended it.

One may interpret this conclusion in a more philosophical way, by observing that the role of the perceptual-cognitive loop is, in essence, to create reality. The reality created is a mere "subjective reality," but for the present purposes, the question of whether there is any more objective reality "out there" is irrelevant. The key point is that the very realness of the subjective world experienced by a mind is a consequence of the perceptual-cognitive loop and its construction of boundaries around entities. This means that reality depends on consciousness in a fairly direct way: and, further, it suggests that what the creative subself accomplishes is to make abstract forms and ideas a concrete reality.

8.5 THE EVOLUTION OF THE PERCEPTUAL-COGNITIVE LOOP

Julian Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, has argued that consciousness evolved suddenly rather than gradually, and that this sudden evolution occurred in the very recent past. He believes that the humans of Homeric times were not truly conscious in the sense that we are. His argument is based primarily on literary evidence: the characters in the Odyssey and other writings of the time never speak of an "inner voice" of consciousness. Instead they refer continually to the voices of the gods. Jaynes proposes that this "hearing of voices," today associated with schizophrenia, was in fact the root of modern consciousness. Eventually the voice was no longer perceived as a voice, but as a more abstract inner guiding force, in other words "consciousness."

Jaynes' theory is admirable in its elegance and boldness; unfortunately, however, it makes very little scientific sense. Inferring the history of mind from the history of literary style is risky, to say the least; and Jaynes' understanding of schizophrenia does not really fit with what we know today. But despite the insufficiency of his arguments, I believe there is a kernel of truth in Jaynes' ideas. In this section I will use the theory of the perceptual-cognitive loop to argue that the idea of a sudden appearance of modern consciousness is quite correct, though for very different reasons than those which Jaynes put forth.

The perceptual-cognitive loop relies on two abilities: the perceptual ability to recognize elementary "features" in sense data, and the cognitive ability to link conjectural "wholes" with items in memory. A sudden jump in either one of these abilities could therefore lead to a sudden jump in consciousness. In EM I argue that the memory, at some point in early human history, underwent a sudden structural "phase transition." I suggest that this transition, if it really occurred, would have caused as a corollary effect a sudden increase in the intensity of consciousness.

The argument for a phase transition in the evolution of memory rests on the idea of the heterarchical subnetwork of the psynet. From the view of the heterarchical network, mind is an associative memory, with connections determined by habituation. So, suppose that, taking this view, one takes N items stored in some organism's memory, and considers two items to be "connected" if the organism's mind has detected pragmatically meaningful relations between them. Then, if the memory is sufficiently complex, one may study it in an approximate way by assuming that these connections are drawn "at random." Random graph theory becomes relevant.

Recall the theory of random graph thresholds, introduced in Chapter Seven. The crucial question, from the random graph theory point of view, is: what is the chance that, given two memory items A and B, there is a connection between A and B?

For instance, if this chance exceeds the value 1/2, then the memory is almost surely a "nearly connected graph," in the sense that one can follow a chain of associations from almost any memory item to almost any other memory item. On the other hand, if this chance is less than 1/2, then the memory is almost certainly a "nearly disconnected graph": following a chain of associations from any one memory item will generally lead only to a small subset of "nearby" memory items. There is a "phase transition" as the connection probability passes 1/2. And this is merely one among many interesting phase transitions.
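The flavor of such a transition is easy to reproduce numerically. The Python sketch below is a plain Erdős-Rényi illustration, not the specific threshold theory of Chapter Seven: it wires up N memory items at random with increasing connection probability and reports the fraction of items reachable by chains of association from a typical item. The jump from small fragments to near-total connectivity is abrupt, though in this standard model its location scales roughly like ln(N)/N rather than sitting at any fixed probability.

    import random
    from collections import deque

    def random_memory_graph(n, p, rng):
        """Each pair of memory items is 'associated' independently with probability p."""
        adj = {i: [] for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    adj[i].append(j)
                    adj[j].append(i)
        return adj

    def reachable_fraction(adj, start=0):
        """Fraction of items reachable by chains of association from one item."""
        seen, queue = {start}, deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return len(seen) / len(adj)

    rng = random.Random(1)
    n = 400
    for p in (0.001, 0.003, 0.01, 0.03, 0.1):
        runs = [reachable_fraction(random_memory_graph(n, p, rng)) for _ in range(5)]
        print(f"p = {p:<6}  mean reachable fraction = {sum(runs) / len(runs):.2f}")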

The evolutionary hypothesis, then, is this. Gradually, the brain became a better and better pattern recognition machine; and as this happened the memory network became more and more densely connected. In turn, the more effective memory became, the more useful it was as a guide for pattern recognition. Then, all of a sudden, pattern recognition became useful enough that it gave rise to a memory past the phase transition. Now the memory was really useful for pattern recognition: pattern recognition processes were able to search efficiently through the memory, moving from one item to the next to the next along a path of gradually increasing relevance to the given object of study. The drastically increased pattern recognition ability filled the memory in even more -- and all of a sudden, the mind was operating on a whole new level.

And one consequence of this "new level" of functioning may have been -- an effective perceptual-cognitive loop. In a mind without a highly active associative memory, there is not so much need for a PCL: coherentization is a protection against reorganizing processes which are largely irrelevant to a pre-threshold memory network. In a complex, highly interconnected memory, reorganization is necessary to improve associativity, but in a memory with very few connections, there is unlikely to be any way of significantly improving associativity. Furthermore, even if the pre-threshold memory did have need of a PCL, it would not have the ability to run the loop through many iterations: this requires each proposed coherentization to be "tested" with numerous different connections. But if few connections are there in the first place, this will only very rarely be possible.

So, in sum, in order for the cognitive end of the loop to work properly, one needs a quality associative memory. A phase transition in associative memory paves the way for a sudden emergence of consciousness.

In this way one arrives at a plausible scientific account of the sudden emergence of consciousness. It is a speculative account, to be sure -- but unlike Jaynes' account it relies on precise models of what is going on inside the brain, and thus it is falsifiable, in the sense that it could be disproved by appropriately constructed computer simulations. Consciousness of objects emerging out of raw consciousness as a consequence of phase transitions in the associative memory network -- certainly, this picture has simplicity and elegance on its side.

8.6 HALLUCINATIONS AND REALITY DISCRIMINATION

The theory of the Perceptual-Cognitive Loop may seem excessively abstract and philosophical. What does it have to say about the concrete questions that concern clinical or experimental psychologists? To show that it is in fact quite relevant to these issues, I will here consider an example of a psychological phenomenon on which the psynet model and PCL shed light: hallucinations.

Hallucinations have proved almost as vexing to psychological theorists as they are to the individuals who experience them. A variety of theories have been proposed, some psychological, some neurobiological, and some combining psychological and physiological factors (for reviews see Slade and Bentall, 1988; Slade, 1994). Jaynes, mentioned above, believed hallucinations to be the evolutionary root of consciousness! However, none of the existing theories has proved entirely satisfactory.

Perhaps the most precise and comprehensive theory to date is that of Slade and Bentall (1988; Slade, 1994), which holds that hallucinations are due to a lack of skill at "reality-testing" or reality discrimination. According to this theory, individuals prone to hallucinate do not generate internal stimuli differently from the rest of us; they merely interpret these stimuli differently.

The Slade and Bentall theory provides a good qualitative understanding of a variety of clinical observations and experimental results regarding hallucinatory experience. However, the direct evidence for the theory is relatively scant. There are experiments which show that individuals with high Launay-Slade Hallucination scores have a bias toward classifying signals as real, instead of imagined (Slade and Bentall, 1988; Feelgood and Rantsen, 1994). But these experiments only indicate a correlation between poor reality discrimination and hallucination. They do not establish a causal relationship, which is what Slade (1994) and Slade and Bentall (1988) posit.

Taking up the same basic conceptual framework as Slade and Bentall, I will argue that, in fact, poor reality discrimination and hallucination produce one another, given an initial impetus by the cognitive trait that Hartmann (1991) calls "thin-boundariedness." In short, certain individuals place particularly permeable boundaries around entities in their minds, including their self-systems. This is a result of individual differences in the nature of consciousness-embodying Perceptual-Cognitive Loops. The tendency to construct permeable boundaries, it is argued, encourages both hallucination and poor reality discrimination, which in turn are involved in positive feedback relationships with each other.

These ideas provide an alternate and, I suggest, more psychologically plausible explanation of the data which Slade and Bentall cite in support of their theory of hallucinations. They are a simple, nontechnical example of how the psynet model encourages one to take a longer, deeper look at the mind than conventional psychological theory requires.

Six Possible Explanations

Before embarking on our analysis of hallucination, some methodological clarifications may be in order. It is well-known that "correlation does not imply causation." However, attention is rarely drawn to the full realm of possible explanations for a correlation between two variables. Given a correlation between A and B, there are at least six distinct possible explanations, all of which must be taken seriously.

The three commonly-recognized explanations of a correlation between A and B are as follows:

1) A may cause B

2) B may cause A

3) A and B may both be caused by some other factor C.

The three additional explanations ensue from dynamical systems theory (Abraham and Shaw, 1991; Goertzel, 1994), which indicates that two variables may in fact "cause each other" due to positive feedback. They are as follows:

4) A and B may cause each other (with no initial prompting other than random fluctuations)

5) A and B may cause each other, once initially activated by some other factor C

6) A and B may cause each other, in a process of mutual feedback with some other factor C

These possibilities are all observed in connectionist models on a routine basis (Rumelhart and McClelland, 1986), and in fact have a long history in Oriental psychology, going back at least to the notion of "dependent causation" in Buddhist psychology (Crook and Rabgyas, 1988).

Let us now consider all six of these possibilities in the context of reality discrimination and hallucination.

First, there is the possibility that unreliable reality discrimination causes hallucinations; i.e. that, as has been claimed, "hallucinations result from a dramatic failure of the skill of reality discrimination" (Slade and Bentall, 1988).

Then there is the possibility that things work the other way around: that hallucinations cause unreliable reality discrimination. This hypothesis is also quite plausible. For, consider reality discrimination as a categorization problem. One might reason as follows. In ordinary experience, there are substantial differences between internally-generated stimuli and externally-generated stimuli. Thus, it is easy to "cluster" stimuli into two categories: real versus imaginary. But, in the experience of a person prone to hallucinations, there is more of a continuum from internally-generated to externally-generated stimuli. The two categories are not so distinct, and thus categorization is not such an easy problem. Quite naturally, for such individuals, skill at distinguishing the two categories will be below par. (A toy numerical illustration of this categorization point is given after the sixth possibility below.)

Third, there is the possibility that both hallucinations and unreliable reality discrimination are caused by some other factor, or some other group of factors. This of course raises the question of what these other factor(s) might be.

Fourth, there is the possibility that hallucinations and unreliable reality discrimination cause each other. In other words, the two may stand in a positive feedback relation to each other: the more one hallucinates, the worse one's reality discrimination becomes; and the worse one's reality discrimination, the more one hallucinates. This possibility is inclusive of the first two possibilities.

Finally, there are combinations of the third and fourth options. The fifth possibility is that an external factor prompts a slight initial propensity toward poor reality discrimination and hallucination, which then blossoms, by positive feedback, into more prominent phenomena. And the sixth possibility is that poor reality discrimination and hallucinations emerge cooperatively, by positive feedback, in conjunction with some other factor.
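As noted under the second possibility, the categorization argument can be put in miniature. The Python sketch below is a deliberately crude illustration with invented Gaussian parameters: "how external a stimulus feels" is a single number, imagined stimuli are centered at zero, real stimuli at some separation, and an ideal threshold classifier plays the role of reality discrimination. As the two clusters slide together -- the hallucination-prone case -- the error rate of even the best possible classifier climbs.

    import random

    def discrimination_error(separation, n=20000, seed=0):
        """Error rate of an optimal threshold classifier when 'imagined' stimuli
        are centered at 0 and 'real' stimuli at `separation`, both with unit noise."""
        rng = random.Random(seed)
        threshold = separation / 2.0          # best boundary for equal variances
        errors = 0
        for _ in range(n):
            imagined = rng.gauss(0.0, 1.0)
            real = rng.gauss(separation, 1.0)
            if imagined >= threshold:         # imagined stimulus judged "real"
                errors += 1
            if real < threshold:              # real stimulus judged "imagined"
                errors += 1
        return errors / (2 * n)

    print("well-separated clusters (thick-boundaried):", round(discrimination_error(4.0), 3))
    print("overlapping clusters (thin-boundaried):    ", round(discrimination_error(1.0), 3))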

Here I will argue for the fifth possibility. My argument is perhaps somewhat speculative, but no more so than the arguments of Slade (1994) and Slade and Bentall (1988). It represents a much more natural interpretation of the data which they adduce in favor of their theory.

Consciousness Creates Reality

The first key conceptual point to remember, in discussing hallucinations, is that even in the normal (non-hallucinating) individual, the role of consciousness is to create percepts and concepts from stimuli, or in other words, to create subjective reality. And the second key point is that not everyone creates reality in the same way; not only do the contents of subjective reality differ from one person to the next, but the internal parameters of the reality-creating process also differ. Hallucinations are a matter of mistaking internally generated stimuli for external reality.

The relation between hallucinations and consciousness has been discussed in the past. In particular, Frith (1979) has used the limited channel capacity of consciousness to explain the occurrence of hallucinations. Because of the limited capacity of consciousness, it is argued, only one hypothesis regarding the nature of a given stimulus can be consciously entertained at a given time. Hallucinations occur when preconscious information about a stimulus is not filtered out, so that consciousness becomes crowded with information, and the correct hypothesis (that the stimulus is internally rather than externally generated) is pushed out. It is worth emphasizing that the ideas presented here are quite different from these. Here we are arguing that individual differences in conscious information processing are crucial for hallucination. Frith, on the other hand, posits that the nature of conscious processing is invariant with respect to the propensity toward hallucination, while the type of information fed into consciousness varies.

How do individuals prone to hallucination differ in their creation of subjective reality? According to the PCL approach, the key stage is judgement of "enough." The PCL coherentizes thought-and-percept-systems, and it keeps going until they are coherent enough -- but what is "enough"? A maximally coherent percept is not desirable, because thought, perception and memory require that ideas possess some degree of flexibility. The individual features of a percept should be detectable to some degree, otherwise how could the percept be related to other similar ones? The trick is to stop the coherence-making process at just the right time. However, it should not be assumed that there is a unique optimal level of coherence. It seems more likely that each consciousness-producing loop has its own characteristic level of cohesion.

And this is where the ideas of Hartmann (1991) become relevant. Hartmann has argued that each person has a certain characteristic "boundary thickness" which they place between the different ideas in their mind. Thick-boundaried people perceive a rigid division between themselves and the external world, and they create percepts and concepts which hold together very tightly. Thin-boundaried people, on the other hand, perceive a more permeable boundary between themselves and the external world, and create more flexible percepts and concepts. Thin-boundaried people are more likely to have hallucinations, and also more likely to have poor reality discrimination (Hartmann, 1988). Thin-boundariedness, I suggest, is the third major factor involved in the dynamics of reality discrimination and hallucination.

Based on several questionnaire and interview studies, Hartmann has shown that boundary thickness is a statistically significant method for classifying personalities. "Thin-boundaried" people tend to be sensitive, spiritual and artistic; they tend to blend different ideas together and to perceive a very thin layer separating themselves from the world. "Thick-boundaried" people, on the other hand, tend to be practical and not so sensitive; their minds tend to be more compartmentalized, and they tend to see themselves as very separate from the world around them. The difference between thin and thick-boundaried personalities, I suggest, lies in the "minimum cohesion level" accepted by the cognitive ends of PCL's. Hartmann himself gives a speculative account of the neural basis of this distinction, which is quite consistent with these ideas.

A Theory of Hallucination

Now we are ready to tie together the various threads that have been introduced, into a coherent theory of the interrelation between reality discrimination, hallucination, and boundary-drawing. I will argue that thin-boundariedness provides the initial impetus for both poor reality-discrimination and hallucination, which then go on to support and produce each other via a relation of positive feedback (Goertzel, 1994).

First, consider the implications of thin-boundariedness for reality discrimination. As compared to an average person, an exceptionally thin-boundaried individual will have PCL's that place less rigid, more flexible boundaries around mental entities. This implies that external, real-world information will be stored and conceptualized more similarly to internal, imagined information. Such a person will naturally have more difficulty distinguishing real stimuli from non-real stimuli -- a conclusion which is supported by Hartmann's data. Thin-boundariedness is bound up with poor reality discrimination in a direct way.

Next, to see the direct relationship between thin-boundariedness and hallucination, it is necessary to consider the iterative nature of PCL's. The key point is that a PCL does not merely classify data, it constructs data. It is responsible for developing stimuli into percepts in particular ways. Thus, if it has judged a certain stimulus to be "real," it will develop it one way; but if it has judged the stimulus to be "imaginary," it will develop it another way. In a thin-boundaried person, there is less likely to be a rigid distinction between different ways of developing stimuli into percepts; so it is much more likely that, as a perceptual-cognitive loop iterates, internal stimuli will be developed as if they were external stimuli. What this means is that thin-boundaried people will be more likely to have internal stimuli that are as vividly and intricately developed as external stimuli -- i.e., as suggested by Hartmann's data, they will be more likely to hallucinate.

So, thin-boundariedness has the ability to lead to both hallucination and poor reality discrimination. We have already suggested, however, that the latter two traits have the propensity to support each other by positive feedback. Poor reality discrimination causes hallucination, according to the mechanisms identified by Slade and Bentall. And hallucinations may cause poor reality discrimination, by giving the mind a more confusing data set on which to base its categorization of internal versus external stimuli.

The view which emerges from these considerations is consistent with the fifth possible relationship listed above: "A and B may cause each other, once initially activated by some other factor C." Thin-boundariedness sets off a process of mutual activation between the traits of poor reality discrimination and hallucination.
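This fifth relationship is easy to render as a toy dynamical system. In the Python sketch below, every coefficient is invented for illustration: H is hallucination propensity, R is poor reality discrimination, and C is thin-boundariedness. C supplies a small constant push; with mutual feedback switched on, H and R amplify each other and settle at a much higher level than the same push produces without feedback, while with C at zero both stay at zero.

    def run_feedback(c, steps=400, gain=0.4, decay=0.5, dt=0.1):
        """Toy mutual-causation dynamic: C nudges both traits, which then feed each other."""
        h = r = 0.0   # hallucination propensity, poor reality discrimination
        for _ in range(steps):
            h, r = (h + dt * (gain * r + c - decay * h),
                    r + dt * (gain * h + c - decay * r))
        return h, r

    for c in (0.0, 0.05, 0.2):                     # increasing thin-boundariedness
        with_fb, _ = run_feedback(c)               # mutual feedback present
        without_fb, _ = run_feedback(c, gain=0.0)  # feedback switched off
        print(f"C = {c:<4}  with feedback: {with_fb:5.2f}   without: {without_fb:5.2f}")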

One might wonder whether the sixth relationship is not the correct one. Perhaps thin-boundariedness is itself produced, in part, by hallucinations or poor reality discrimination. But this conclusion does not seem to hold up. There is no clear causative link between hallucination and thin-boundariedness. And the polarity of the influence of poor reality-discrimination on boundary-drawing seems quite open. In some individuals, poor reality discrimination might cause excessive thin-boundariedness; but in others, it might instead cause excessive thick-boundariedness. In yet others, it might do neither. The question is whether an individual tends to err on the side of making external stimuli "loose" or making internal stimuli "tight" -- or errs in a non-biased, neutral way.

Conclusion

I have used the PCL and the notion of mental-process intercreation to propose a new explanation of the correlation between hallucination and reality discrimination. My explanation differs from that of Slade and Bentall (1988) in that it posits a circular causality between the two factors, initiated by a third factor of thin-boundariedness. While the Slade and Bentall theory is simpler, the present theory is psychologically more plausible, and does greater justice to the complexity of the human mind.

The question arises whether the present theory is empirically distinguishable from the Slade and Bentall theory. Such a distinction, it seems, could be made only by a study of the development of hallucinations in individuals prone to hallucinate, say schizophrenic individuals. The Slade and Bentall view, in which poor reality discrimination causes hallucination, would predict that poor reality discrimination should precede hallucinations, and should not increase proportionately to hallucinations. The present theory predicts that the two factors should increase together, gradually, over the course of development.

8.7 PROPRIOCEPTION OF THOUGHT

Now let us return to more fundamental issues -- to the question of the nature of consciousness itself. Raw awareness, I have said, is essentially random. The structure of consciousness, which exploits raw awareness, is that of an iteratively coherentizing perceptual-cognitive loop. Is there any more that can be said?

I believe that there is. One can make very specific statements about the types of magician systems involved in states of consciousness. Before we can get to this point, however, some preliminary ideas must be introduced. Toward this end, we will draw inspiration from a somewhat unlikely-sounding direction: the philosophical thought of the quantum physicist David Bohm, as expressed (among other places) in his book Thought as a System (1988).

Bohm views thought as a system of reflexes -- habits, patterns -- acquired from interacting with the world and analyzing the world. He understands the self-reinforcing, self-producing nature of this system of reflexes. And he diagnoses our thought-systems as being infected by a certain malady, which he calls the absence of proprioception of thought.

Proprioceptors are the nerve cells by which the body determines what it is doing -- by which the mind knows what the body is doing. To understand the limits of your proprioceptors, stand up on the ball of one foot, stretch your arms out to your sides, and close your eyes. How long can you retain your balance? Your balance depends on proprioception, on awareness of what you are doing. Eventually the uncertainty builds up and you fall down. People with damage to their proprioceptive system can't stay up as long as the rest of us.

According to Bohm,

... [T]hought is a movement -- every reflex is a movement really. It moves from one thing to another. It may move the body or the chemistry or just simply the image or something else. So when 'A' happens 'B' follows. It's a movement. All these reflexes are interconnected in one system, and the suggestion is that they are not in fact all that different. The intellectual part of thought is more subtle, but actually all the reflexes are basically similar in structure. Hence, we should think of thought as a part of the bodily movement, at least explore that possibility, because our culture has led us to believe that thought and bodily movement are really two totally different spheres which are not basically connected. But maybe they are not different. The evidence is that thought is intimately connected with the whole system. If we say that thought is a reflex like any other muscular reflex -- just a lot more subtle and more complex and changeable -- then we ought to be able to be proprioceptive with thought. Thought should be able to perceive its own movement. In the process of thought there should be awareness of that movement, of the intention to think and of the result which that thinking produces. By being more attentive, we can be aware of how thought produces a result outside ourselves. And then maybe we could also be attentive to the results it produces within ourselves. Perhaps we could even be immediately aware of how it affects perception. It has to be immediate, or else we will never get it clear. If you took time to be aware of this, you would be bringing in the reflexes again. So is such proprioception possible? I'm raising the question....

The basic idea here is quite simple. If we had proprioception of thought, we could feel what the mind was doing, at all times -- just as we feel what the body is doing. Our body doesn't generally carry out acts on the sly, without our observation, understanding and approval. But our mind (our brain) continually does exactly this. Bohm traces back all the problems of the human psyche and the human world -- warfare, environmental destruction, neurosis, psychosis -- to this one source: the absence of proprioception of thought. For, he argues, if we were really aware of what we were doing, if we could fully feel and experience everything we were doing, we would not do these self-destructive things.

An alternate view of this same idea is given by the Zen master Thich Nhat Hanh (1985), who speaks not of proprioception but of "mindfulness." Mindfulness means being aware of what one is doing, what one is thinking, what one is feeling. Thich Nhat Hanh goes into more detail about what prevents us from being mindful all the time. In this connection he talks about samyojana -- a Sanskrit word that means "internal formations, fetters, or knots." In modern terminology, samyojana are nothing other than self-supporting thought-systems:

When someone says something unkind to us, for example, if we do not understand why he said it and we become irritated, a knot will be tied in us. The lack of understanding is the basis for every internal knot. If we practice mindfulness, we can learn the skill of recognizing a knot the moment it is tied in us and finding ways to untie it. Internal formations need our full attention as soon as they form, while they are still loosely tied, so that the work of untying them will be easy.

Self-supporting thought systems, systems of emotional reflexes, guide our behaviors in all sorts of ways. Thich Nhat Hanh deals with many specific examples, from warfare to marital strife. In all cases, he suggests, simple sustained awareness of one's own actions and thought processes -- simple mindfulness -- will "untie the knots," and free one from the bundled, self-supporting systems of thought/feeling/behavior.

Yet another formulation of the same basic concept is given by psychologist Stanislaw Grof (1994). Grof speaks, not of knots, but rather of "COEX systems" -- systems of compressed experience. A COEX system is a collection of memories and fantasies, from different times and places, bound together by the self-supporting process dynamics of the mind. Elements of a COEX system are often joined by similar physical elements, or at least similar emotional themes. An activated COEX system determines a specific mode of perceiving and acting in the world. A COEX system is an attractor of mental process dynamics, a self-supporting subnetwork of the mental process network, and, in Buddhist terms, a samyojana or knot. Grof has explored various radical techniques, including LSD therapy and breathwork therapy, to untie these knots, to weaken the grip of these COEX systems. The therapist is there to assist the patient's mental processes, previously involved in the negative COEX system, in reorganizing themselves into a new and more productive configuration.

Reflexes and Magicians

Before going further along this line, we must stop to ask: What does this notion of "proprioception of thought," based on a neo-behaviourist, reflex-oriented view of mind, have to do with the psynet model? To clarify the connection, we must first establish the connection between reflexes and magicians. The key idea here is that, in the most general sense, a habit is nothing other than a pattern. When a "reflex arc" is established in the brain, by modification of synaptic strengths or some other method, what is happening is that this part of the brain is recognizing a pattern in its environment (either in the other parts of the brain to which it is connected, or in the sensory inputs to which it is connected).

A reflex, in the psynet model, may be modelled as the interaction of three magicians: one for perception, one for action, and one for the "thought" (i.e. for the internal connection between perception and action). The "thought" magician must learn to recognize patterns among stimuli presented at different times and generate appropriate responses.

This view of reflexes is somewhat reminiscent of the triangular diagrams introduced by Gregory Bateson in his posthumous book Angels Fear (1989). I call these diagrams "learning triads." They are a simple and general tool for thinking about complex, adaptive systems. In essence, they are a system-theoretic model of the reflex arc.

Bateson envisions the fundamental triad of thought, perception and action, arranged in a triangle:

THOUGHT

/ \

PERCEPTION -- ACTION

The logic of this triad is as follows. Given a percept, constructed by perceptual processes from some kind of underlying data, a thought process decides upon an action, which is then turned into a concrete group of activities by an action process. The results of the actions taken are then perceived, along with the action itself, and fed through the loop again. The thought process must judge, on the basis of the perceived results of the actions, and the perceived actions, how to choose its actions the next time a similar percept comes around.

Representing the three processes of THOUGHT, PERCEPTION and ACTION as pattern/process magicians, the learning triad may be understood as a very basic autopoietic mental process system. Furthermore, it is natural to conjecture that learning triads are autopoietic subsystems of magician system dynamics.

The autopoiesis of the system is plain: as information passes around the loop, each process is created by the other two that come "before" it. The attraction is also somewhat intuitively obvious, but perhaps requires more comment. It must be understood that no particular learning triad is being proposed as an attractor, in the sense that nearby learning triads will necessarily tend to it. The claim is rather that the class of learning triads constitutes a probabilistic strange attractor of magician dynamics, meaning that a small change in a learning triad will tend to produce something that adjusts itself until it is another learning triad. If this is true, then learning triads should be stable with respect to small perturbations: small perturbations may alter their details but will not destroy their basic structure as a learning mechanism.

Pattern, Learning and Compression

We have cast Bohm's reflex-oriented view of mind in terms of pattern/process magicians. A reflex arc is, in the psynet model, re-cast as a triadic autopoietic magician system. In this language, proprioception of thought -- awareness of which reflexes have produced and are producing a given action -- becomes awareness of the magician dynamics underlying a given behaviour.

In this context, let us now return to the notion of pattern itself. The key point here is the relation between pattern and compression. Recall that to recognize a pattern in something is to compress it into something simpler -- a representation, a skeleton form. Given the overwhelmingly vast and detailed nature of inner and outer experience, it is inevitable that we compress our experiences into abbreviated, abstracted structures; into what Grof calls COEX's. This is the function of the hierarchical network: to come up with routines, procedures, that will function adequately in a wide variety of circumstances.

The relation between pattern and compression is well-known in computer science, in the fields of image compression and text compression. In these contexts, the goal is to take a computer file and replace it with a shorter file, containing the same or almost the same contents. Text compression is expected to be lossless: one can reconstruct the text exactly from the compressed version. On the other hand, image compression is usually expected to be lossy. The eye doesn't have perfect acuity, and so a bit of error is allowed: the picture that you reconstruct from the compressed file doesn't have to be exactly the same as the original.

Psychologically, the result of experience compression is a certain ignorance. We can never know exactly what we do when we lift up our arm to pick up a glass of water, when we bend over to get a drink, when we produce a complex sentence like this one, when we solve an equation or seduce a woman. We do not need to know what we do: the neural network adaptation going on in our brain figures things out for us. It compresses vast varieties of situations into simple, multipurpose hierarchical brain structures. But having compressed, we no longer have access to what we originally experienced, only to the compressed form. We have lost some information.

For instance, a man doesn't necessarily remember the dozens of situations in which he tried to seduce women (successfully or not). The nuances of the different women's reactions, the particular situations, the moods he was in on the different occasions -- these are in large part lost. What is taken away is a collection of abstracted patterns that the mind has drawn out of these situations.

Or, to take another example, consider the process of learning a tennis serve. One refines one's serve over a period of many games, by a process of continual adaptation: this angle works better than that, this posture works better so long as one throws the ball high enough, etc. But what one takes out of this is a certain collection of motor processes, a collection of "serving procedures." It may be that one's inferences regarding how to serve have been incorrect: that, if one had watched one's serving attempts on video (as is done in the training of professional tennis players), one would have derived quite different conclusions about how one should or should not serve. But this information is lost, it is not accessible to the mind: all that is available is the compressed version, i.e., the serving procedures one has induced. Thus, if one is asked why one is serving the way one is, one can never give a decent answer. The answer is that the serve has been induced by learning triads, from a collection of data that is now largely forgotten.

The point is that mental pattern recognition is in general highly lossy compression. It takes place in purpose-driven learning triads. One does not need to recognize all the patterns in one's tennis serving behavior -- that is, enough patterns to regenerate the full collection of data at one's disposal. One only wants those patterns that are useful to one's immediate goal of developing a better serve. In the process of abstracting information for particular goals (in Bohm's terms, for the completion of particular reflex arcs), a great deal of information is lost: thus psychological pattern recognition, like lossy image compression, is a fundamentally irreversible process.

This is, I claim, the ultimate reason for what Bohm calls the absence of proprioception of thought. It is the reason why mindfulness is so difficult. The mind does not know what it is doing because it can do what it does far more easily without the requirement to know what it is doing. Proceeding blindly, without mindfulness, thought can wrap up complex aggregates in simple packages and proceed to treat the simple packages as if they were whole, fundamental, real. This is the key to abstract symbolic thought, to language, to music, mathematics, art. Intelligence itself rests on compression: on the substitution of packages for complex aggregates, on the substitution of tokens for diverse communities of experiences. It requires us to forget the roots of our thoughts and feelings, in order that we may use them as raw materials for building new thoughts and feelings.

If, as Bohm argues, the lack of proprioception of thought is the root of human problems, then the only reasonable conclusion would seem to be that human problems are inevitable.

8.8 THE ALGEBRA OF CONSCIOUSNESS

With these philosophical notions under our belt, we are now ready to turn to a most crucial question: if the purpose of consciousness is to create autopoietic systems, then what sorts of autopoietic systems does consciousness create? The answer to this question might well be: any kind of autopoietic system. I will argue, however, that this is not the case: that in fact consciousness produces very special sorts of systems, namely, systems with the structure of quaternionic or octonionic algebras.

In order to derive this somewhat surprising conclusion from the psynet model, only one additional axiom will be required, namely, that the autopoietic systems constructed by consciousness are "timeless," without an internal sense of irreversibility (an "arrow of time"). I.e.,

In the magician systems contained in consciousness, magician operations are reversible

In view of the discussion in the previous section, an alternate way to phrase this axiom is as follows:

At any given time, proprioception of thought extends through the contents of consciousness

Bohm laments that the mind as a whole does not know what it is doing. I have argued that, on grounds of efficiency, the mind cannot know what it is doing. It is immensely more efficient to compress experience in a lossy, purpose-driven way, than to maintain all experiences along with the patterns derived from them. However, this argument leaves room for special cases in which thought is proprioceptive. I posit that consciousness is precisely such a special case. Everything that is done in consciousness is explicitly felt, in the manner of physical proprioception: it is there, you can sense its being, feel it move and act.

Of course, physical proprioception can be unconscious; and so can mental proprioception. One part of the mind can unconsciously sense what another part is doing. The point, however, is that conscious thought-systems are characterized by self-proprioception. I will take this as an axiom, derived from phenomenology, from individual experience. This axiom is not intended as an original assertion, but rather as a re-phrasing of an obvious aspect of the very definition of consciousness. It should hardly be controversial to say that a conscious thought or thought-system senses itself.

In the context of the psynet model, what does this axiom mean? It means, that, within the scope of consciousness, magician processes are reversible. There is no information loss. What is done, can be undone.

Algebraically, reversibility of magician operations corresponds to division. For, if multiplication represents both action and pattern recognition, then the inverse under multiplication is an operation of undoing. If

A * B = C

this means that by acting on B, A has produced this pattern C; and thus, in the context of the cognitive equation, that C is a pattern which A has recognized in B. Now the inverse of A, when applied to C, yields

A^-1 * C = B

In other words, it restores the substrate from the pattern: it looks at the pattern C and tells you what the entity was in which A recognized the pattern.

For a non-psychological illustration, let us return, for a moment, to the example of text compression. A text compression algorithm takes a text, a long sequence of symbols, and reduces it to a much shorter text by eliminating various redundancies. If the original text is very long, then the shorter text, combined with the decompression algorithm, will be a pattern in the original text. Formally, if B is a text, and A is a compression algorithm, then A * B = C means that C is the pattern in B consisting of the compressed version of B plus the decompression algorithm. A^-1 is then the process which transforms C into B; i.e., it is the process which causes the decompression algorithm to be applied to the compressed version, thus reconstituting the original text B. The magician A compresses, the magician A^-1 decompresses.
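As a minimal concrete sketch of this correspondence (in Python, using the standard zlib library; the names A, B and C here simply mirror the discussion above and are purely illustrative):

import zlib

# B: the substrate, a long and highly redundant text
B = ("the rain in spain stays mainly in the plain. " * 200).encode()

# A acting on B: compression produces the pattern C
# (the decompression algorithm is understood to travel along with C)
C = zlib.compress(B)

# A^-1 acting on C: decompression restores the substrate B exactly
assert zlib.decompress(C) == B

print(len(B), "->", len(C))   # the pattern is far shorter than the substrate

Because this compression is lossless, the inverse magician really does exist; the lossy, purpose-driven compression discussed in the previous section is precisely the case in which it does not.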

So, proprioception of thought requires division. It requires that one does not rely on patterns as lossy, compressed versions of entities; that one always has access to the original entities, so one can see "what one is doing." Suppose, then, one has a closed system of thoughts, a mental system, in which division is possible; in which mental process can proceed unhindered by irreversibility. Recall that mental systems are subalgebras. The conclusion is that proprioceptive mental systems are necessarily division algebras: they are magician systems in which every magician has an inverse. The kinds of algebras which consciousness constructs are division algebras.

This might at first seem to be a very general philosophical conclusion. However, it turns out to place very strict restrictions on the algebraic structure of consciousness. For, as is well-known in abstract algebra, the finite-dimensional division algebras are very few indeed.

Quaternions and Octonions

The real number line is a division algebra. So are the complex numbers. There is no three-dimensional division algebra: no way to construct an analogue of the complex numbers in three dimensions. However, there are division algebras in four and eight dimensions; these are called the quaternions and the octonions (or Cayley algebra); see, e.g., Kurosh (1963).

The quaternions are a group consisting of the entities {1,i,j,k} and their "negatives" {-1,-i,-j,-k}. The group's multiplication table is defined by the products

i * j = k

j * k = i

k * i = j

i * i = j * j = k * k = -1.

This is a simple algebraic structure, distinguished by the odd "twist" of the multiplication table according to which the product of any two of the three quantities {i,j,k} yields the third, up to sign.

The real quaternions are the set of all real linear combinations of {1,i,j,k}, i.e., the set of all expressions a + bi + cj + dk where a, b, c and d are real. They are a four-dimensional, noncommutative extension of the complex numbers, with numerous applications in physics and mathematics.
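For readers who like to see such a table in executable form, here is a minimal Python sketch, representing the quaternion a + bi + cj + dk as the tuple (a,b,c,d) and checking the products listed above (the function name quat_mul is simply an illustrative choice):

def quat_mul(x, y):
    # Hamilton product of x = a + bi + cj + dk and y = e + fi + gj + hk
    a, b, c, d = x
    e, f, g, h = y
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

one, i, j, k = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
neg = lambda q: tuple(-t for t in q)

assert quat_mul(i, j) == k and quat_mul(j, k) == i and quat_mul(k, i) == j
assert quat_mul(i, i) == quat_mul(j, j) == quat_mul(k, k) == neg(one)
assert quat_mul(i, neg(i)) == one   # the negative of i is also its inverse

The last assertion anticipates the identity of additive and multiplicative inverses discussed below.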

Next, the octonions are the algebraic structure formed from the collection of entities q + Er, where q and r are quaternions and E is a new element which, however, also satisfies E^2 = -1. These may be considered as a vector space over the reals, yielding the real octonions. While the quaternions are merely non-commutative, the octonions are non-associative as well. The octonions have a subtle algebraic structure which is rarely if ever highlighted in textbooks, but which has been explored in detail by Onar Aam and Tony Smith (personal communication). The canonical basis for the octonion algebra is given by (i,j,k,E,iE,jE,kE). Following a suggestion of Onar Aam, I will adopt the simple notation I = iE, J = jE, K = kE, so that the canonical basis becomes (i,j,k,E,I,J,K).

Both the real quaternions and the real octonions have the property of allowing division. That is, every element has a unique multiplicative inverse, so that the equation A * B = C can be solved for either factor; for instance, B = A^-1 * C. The remarkable fact is that these are not only good examples of division algebras, they are just about the only reasonable examples of division algebras. One may prove that all finite-dimensional real division algebras have dimension 1, 2, 4 or 8. Furthermore, the only division algebras with the property of alternativity are the reals, complexes, real quaternions and real octonions. Alternativity means that every subalgebra generated by two elements is associative. These results are collected under the name of the Generalized Frobenius Theorem. Finally, the only finite-dimensional algebras which admit a multiplicative norm are, again, the reals, complexes, real quaternions and real octonions.

What these theorems teach us is that these are not merely arbitrary examples of algebras. They are very special algebras, which play several unique mathematical roles.

Several aspects of the quaternion and octonion multiplication tables are particularly convenient in the magician system framework. Perhaps the best example is the identity of additive and multiplicative inverses. The rule A^-1 = -A (which applies to all the imaginary basis elements) says that undoing (reversing) is the same as annihilation. Undoing yields the identity magician 1, which reproduces everything it comes into contact with. Annihilation yields zero, which leaves alone everything it comes into contact with. The ultimate effect of inserting the inverse of A into a system containing A is thus either to reproduce the system in question (if multiplication is done first), or to reproduce the next natural phase in the evolution of the system (if addition is done first).

Compare this to what happens in a magician system governed by an arbitrary algebra. Given a non-identity element A not obeying the rule A^2 = -1, one has to distinguish whether its opposite is to act additively or multiplicatively. In the magician system framework, however, there is no easy way to make this decision: the opposites are simply released into the system, free to act both additively and multiplicatively. The conclusion is that only in extended imaginary algebras like the quaternions and octonions can one get a truly natural magician system negation.

Consciousness and Division Algebras

The conclusion to which the Generalized Frobenius Theorem leads us is a simple and striking one: the autopoietic systems which consciousness constructs are quaternionic and octonionic in structure. This is an abstract idea, which has been derived by abstract means, and some work will be needed to see what intuitive sense it makes. However, the reasoning underlying it is hopefully clear.

The notion of reversibility being used here may perhaps benefit from a comparison with the somewhat different notion involved in Edward Fredkin's (Fredkin and Toffoli, 1982) theory of reversible computation. Fredkin has shown that any kind of computation can be done in an entirely reversible way, so that computational systems need not produce entropy. His strategy for doing this is to design reversible logic gates, which can then be strung together to produce any Boolean function, and thus simulate any Turing machine computation. These logic gates, however, operate on the basis of redundancy. That is, when one uses these gates to carry out an operation such as "A and B," enough information is stored to reconstruct both A and B, in addition to their conjunction.

The main difference between these reversible computers and the reversible magician systems being considered here is the property of closure. According to the psynet model, mental entities are autopoietic magician systems. In the simplest case of fixed point attractors, this implies that mental entities are algebraically closed magician systems: they do not lead outside themselves. Even in the case of periodic or strange attractors, there is still a kind of closure: there is a range of magicians which cannot be escaped. Finally, in the most realistic case of stochastic attractors, there is a range of magicians which is unlikely to be escaped. In each case, everything in the relevant range is producible by other elements in that same range: the system is self-producing. Fredkin's logic gates do not, and are not intended to, display this kind of property. It is not in any way necessary that each processing unit in a reversible computing system be producible by combinations of other processing units.

The contrast with reversible computers emphasizes the nature of the argument that has led us to postulate a quaternionic/octonionic structure for consciousness. The simple idea that consciousness is reversible does not lead you to any particular algebraic structure. One needs the idea of reversible conscious operations in connection with the psynet model, which states that mental systems are autopoietic subsystems of process dynamical systems. Putting these two pieces together leads immediately to the finite division algebras.

Next, a word should be said about the phenomenological validity of the present theory. The phenomenology of consciousness is peculiar and complex. But one thing that may be asserted is that consciousness combines a sense of momentary timelessness, of being "outside the flow of time," with a sense of change and flow. Any adequate theory of consciousness must explain both of these sensations. The current theory fulfills this criterion. Consciousness, it is argued, is concerned with constructing systems that lack a sense of irreversibility, that stand outside the flow of time. But in the course of constructing such systems, consciousness carries out irreversible processes. Thus there is actually an alternation between timelessness and time-boundedness. Both are part of the same story.

Finally, let us move from phenomenology to cognitive psychology. If one adopts the view given here, one is immediately led to the conclusion that the Generalized Frobenius Theorem solves an outstanding problem of theoretical psychology: it explains why consciousness is bounded. No previous theory of consciousness has given an adequate answer for the question: Why can't consciousness extend over the whole mind? But in the algebraic, psynet view, the answer is quite clear. There are no division algebras the size of the whole mind. The biggest one is the octonions. Therefore consciousness is limited to the size of the octonions: seven elements and their associated anti-elements.

The occurrence of the number 7 here is striking, for, as is well-known, empirical evidence indicates that the capacity of human short-term memory is about seven items; the figure is usually given as 7 +/- 2. Of course, 7 is a small number, which can be obtained in many different ways. But, nevertheless, the natural occurrence of the number 7 in the algebraic theory of consciousness is a valuable piece of evidence. In no sense was the number 7 arbitrarily put into the theory. It emerged as a consequence of deep mathematics, from the simple assumption that the contents of consciousness, at any given time, form a reversible, autopoietic magician system.

8.9 MODELLING STATES OF MIND

The quaternionic and octonionic algebras are structures which can be concretized in various ways. They describe a pattern of inter-combination, but they do not specify the things combined, nor the precise nature of the combinatory operation.

Thus, the way in which the algebraic structures are realized can be expected to depend upon the particular state of mind involved. For instance, a state of meditative introspection is quite different from a state of active memorization, which is in turn quite different from a state of engagement with sensory or motor activities. Each of these different states may involve different types of mental processes, which act on each other in different ways, and thus make use of the division algebra structure quite differently.

As the quaternions and octonions are very flexible structures, there will be many different ways in which they can be used to model any given state of mind. What will be presented here are some simple initial modelling attempts. The models constructed are extremely conceptually natural, but that is of course no guarantee of their correctness. The advantage of these models, however, is their concreteness. Compared to the general, abstract quaternion-octonion model, they are much more closely connected to daily experience, experimental psychology and neuroscience. They are, to use the Popperian term, "falsifiable," if not using present experimental techniques, then at least plausibly, in the near future. They also reveal interesting connections with ideas in the psychological and philosophical literature.

But perhaps the best way to understand the nature of the ideas in this section is to adopt a Lakatosian, rather than Popperian, philosophy of science. According to Lakatos, the "hard core" of a theory, which is too abstract and flexible to be directly testable, generates numerous, sometimes mutually contradictory "peripheral theories" which are particular enough to be tested. The hard core here is the quaternionic-octonionic theory of consciousness, clustered together with the psynet model and the concepts of thought-proprioception and reversible consciousness. The peripheral theories are applications of the algebras to particular states of consciousness.

An Octonionic Model of Short-Term Memory

Consider now a state of consciousness involving memorization or intellectual thought, which requires holding a number of entities in consciousness for a period of time, and observing their interactions. In this sort of state consciousness can be plausibly identified with what psychologists call "short-term memory" or "working memory."

The most natural model of short-term memory is one in which each of the entities held in consciousness is identified with one of the canonical basis elements {i,j,k,E,I,J,K}. The algebraic operation * is to be interpreted as merely a sequencing operation. While all the entities held in consciousness are in a sense "focuses of attention," in general some of the entities will be focussed on more intensely than others; and usually one entity will be the primary focus. The equation A * B = C means that a primary focus on A, followed by a primary focus on B, will be followed by a primary focus on C.

This model of short-term memory implies the presence of particular regularities in the flow of the focus of consciousness. The nature of these regularities will depend on the particular way the entities being stored in short-term memory are mapped onto the octonions. For instance, suppose one wants to remember the sequence

PIG, COW, DOG, ELEPHANT, FOX, WOMBAT, BILBY

Then, according to the model, these elements must be assigned in a one-to-one manner to the elements of some basis of the octonion algebra. This need not be the canonical basis introduced above, but for purposes of illustration, let us assume that it is. For convenience, let us furthermore assume that the identification is done in linear order according to the above list, so that we have

PIG       COW       DOG       ELEPHANT       FOX       WOMBAT       BILBY

i         j         k         E              I         J            K

Then the theory predicts that, having focussed on PIG and then COW, one will next focus on DOG. On the other hand, having focussed on DOG, and then on the absence or negative of COW, one will next focus on PIG.
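A small Python sketch may make this prediction concrete. It builds the octonions from pairs of quaternions via the Cayley-Dickson doubling construction (one concrete choice of multiplication table; other textbook tables differ in signs but not in structure), maps the seven items onto the canonical basis, and computes the predicted next focus. The names used here, such as oct_mul and next_focus, are of course just illustrative choices.

def quat_mul(x, y):
    a, b, c, d = x; e, f, g, h = y
    return (a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)

def quat_conj(x):
    return (x[0], -x[1], -x[2], -x[3])

def oct_mul(x, y):
    # Cayley-Dickson doubling: an octonion is a pair (p, q) of quaternions
    p, q = x; r, s = y
    return (tuple(u - v for u, v in zip(quat_mul(p, r), quat_mul(quat_conj(s), q))),
            tuple(u + v for u, v in zip(quat_mul(s, p), quat_mul(q, quat_conj(r)))))

Q1, Qi, Qj, Qk, Q0 = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (0,0,0,0)
basis = {"PIG": (Qi, Q0), "COW": (Qj, Q0), "DOG": (Qk, Q0),        # i, j, k
         "ELEPHANT": (Q0, Q1),                                     # E
         "FOX": (Q0, Qi), "WOMBAT": (Q0, Qj), "BILBY": (Q0, Qk)}   # I, J, K
neg = lambda x: (tuple(-t for t in x[0]), tuple(-t for t in x[1]))

def next_focus(a, b):
    # a primary focus on a, then on b, is predicted to be followed by a * b
    c = oct_mul(a, b)
    for name, val in basis.items():
        if c == val: return name
        if c == neg(val): return "-" + name
    return c

print(next_focus(basis["PIG"], basis["COW"]))        # DOG:  i * j = k
print(next_focus(basis["DOG"], neg(basis["COW"])))   # PIG:  k * (-j) = i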

Of course, the order of focus will be different if the mapping of memory items onto basis elements is different. There are many different bases for the octonions, and for each basis there are many possible ways to map seven items onto the seven basis elements. But nevertheless, under any one of these many mappings, there would be a particular order to the way the focus shifted from one element to the other.

This observation leads to a possible method of testing the octonionic model of short-term memory. The model would be falsified if it were shown that the movement from one focus to another in short-term memory were totally free and unconstrained, with no restrictions whatsoever. This would not falsify the general division algebra model of consciousness, which might be applied to short-term memory in many different ways; but it would necessitate the development of a more intricate connection between octonions and short-term memory.

This octonionic model of short-term memory may be understood in terms of chaos theory -- an interpretation which, one suspects, may be useful for empirical testing. Suppose one has a dynamical system representing short-term memory, e.g. a certain region of the brain, or a family of neural circuits. The dynamics of this system can be expected to have a "multi-lobed" attractor similar to that found by Freeman in his famous studies of olfactory perception in the rabbit. Each lobe of the attractor will correspond to one of the elements stored in memory. The question raised by the present model is then one of the second-order transition probabilities between attractor lobes. If these probabilities are all equal then the simple octonionic model suggested here is falsified. On the other hand, if the probabilities are biased in a way similar to one of the many possible octonion multiplication tables, then one has found a valuable piece of evidence in favor of the present model of short-term memory and, indirectly, in favor of the finite division algebra theory of consciousness. This experiment cannot be done at present, because of the relatively primitive state of EEG, ERP and brain scan technology. However, it is certainly a plausible experiment, and there is little doubt that it will be carried out at some point over the next few decades.

Learning Triads

The previous model dealt only with the holding of items in memory. But what about the active processing of elements in consciousness? What, for example, about states of consciousness which are focussed on learning motor skills, or on exploring the physical or social environment?

To deal with these states we must return to the notion of a "learning triad," introduced above as a bridge between the psynet model and Bohm's reflex-oriented psychology. The first step is to ask: how might we express the logic of the learning triad algebraically? We will explore this question on an intuitive basis, and then go back and introduce definitions making our insights precise.

First of all, one might write

THOUGHT = PERCEPTION * ACTION

ACTION = THOUGHT * PERCEPTION

PERCEPTION = ACTION * THOUGHT

These equations merely indicate that a perception of an action leads to a revised thought, a thought about a perception leads to an action, and an action based on a thought leads to a perception. They express in equations what the triad diagram itself says.

The learning triad is consistent with the quaternions. It is consistent, for example, with the hypothetical identification

PERCEPTION       ACTION       THOUGHT

i                j            k

where the assignment respects the cyclic order of the quaternion multiplication table, so that PERCEPTION * ACTION = THOUGHT corresponds to i * j = k.

But the three rules given above do not account for much of the quaternion structure. In order to see how more of the structure comes out in the context of learning triads, we must take a rather unexpected step. We must ask: How does the activity of the learning triad relate to standard problem-solving techniques in learning theory and artificial intelligence?

Obviously, the learning triad is based on a complex-systems view of learning, rather than a traditional, procedural view. But on careful reflection, the two approaches are not so different as they might seem. The learning triad is actually rather similar to a simple top-down tree search. One begins from the top node, the initial thought. One tests the initial thought and then modifies it in a certain way, giving another thought, which may be viewed as a "child" of the initial thought. Then, around the loop again, to a child of the child. Each time one is modifying what came before -- moving down the tree of possibilities.

Viewed in this way, however, the learning triad is revealed to be a relatively weak learning algorithm. There is no provision for "backtracking" -- for going back up a node, retreating from a sequence of modifications that has not borne sufficient fruit. In order to backtrack, one would like to actually erase the previous modification to the thought process, and look for another child node, an alternative to the child node already selected.

An elegant way to view backtracking is as going around the loop the wrong way. In backtracking, one is asking, e.g.: What is the thought that gave rise to this action? Or, what is the action that gave rise to this percept? Or, what is the percept that gave rise to this thought? In algebraic language, one is asking questions that might be framed

THOUGHT * ACTION = ?

ACTION * PERCEPT = ?

PERCEPT * THOUGHT = ?

In going backwards in time, while carrying out the backtracking method, what one intends to do is to wipe out the record of the abandoned search path. One wants to eliminate the thought-process modifications that were chosen based on the percept; one wants to eliminate the actions based on these thought modifications; and one wants to eliminate the new percept that was formed in this way. Thus, speaking schematically, one wants to meet PERCEPT with -PERCEPT, ACTION with -ACTION and THOUGHT with -THOUGHT. Having annihilated all that was caused by the abandoned choice, one has returned to the node one higher in the search tree. The natural algebraic rules for backtracking are thus:

THOUGHT * ACTION = - PERCEPT

ACTION * PERCEPT = - THOUGHT

PERCEPT * THOUGHT = - ACTION

Backtracking is effected by a backwards movement around the learning triad, which eliminates everything that was just laid down.

The view of learning one obtains is then one of repeated forward cycles, interspersed with occasional backward cycles, whenever the overall results of the triad are not satisfactory. The algebraic rules corresponding to this learning method are consistent with the quaternion multiplication table. The division-algebra structure of consciousness is in this way seen to support adaptive learning.
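A brief Python check makes this consistency concrete. Under the identification PERCEPTION = i, ACTION = j, THOUGHT = k proposed above, both the forward learning equations and the backtracking equations reduce to entries of the quaternion multiplication table:

def quat_mul(x, y):
    # Hamilton product of quaternions represented as 4-tuples
    a, b, c, d = x; e, f, g, h = y
    return (a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)

neg = lambda q: tuple(-t for t in q)
PERCEPTION, ACTION, THOUGHT = (0,1,0,0), (0,0,1,0), (0,0,0,1)   # i, j, k

# forward cycle: each process is produced by the other two
assert quat_mul(PERCEPTION, ACTION) == THOUGHT
assert quat_mul(THOUGHT, PERCEPTION) == ACTION
assert quat_mul(ACTION, THOUGHT) == PERCEPTION

# backtracking: going around the loop the wrong way yields anti-elements
assert quat_mul(THOUGHT, ACTION) == neg(PERCEPTION)
assert quat_mul(ACTION, PERCEPTION) == neg(THOUGHT)
assert quat_mul(PERCEPTION, THOUGHT) == neg(ACTION)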

In this view, the reason for the peculiar power of conscious reasoning becomes quite clear. Consciousness is sequential, while unconscious thought is largely parallel. Consciousness deals with a small number of items, while unconscious thought is massive, teeming, statistical. But the value of conscious thought is that it is entirely self-aware, and hence it is reversible. And the cognitive value of reversibility is that it allows backtracking: it allows explicit retraction of past thoughts, actions and perceptions, and setting down new paths.

In the non-reversible systems that dominate the unconscious, once something is provisionally assumed, it is there already and there is no taking it back (not thoroughly at any rate). In the reversible world of consciousness, one may assume something tentatively, set it aside and reason about it, and then retract it if a problem occurs, moving on to another possibility. This is the key to logical reasoning, as opposed to the purely intuitive, habit-based reasoning of the unconscious.

Thought Categories and Algebraic Elements

We have posited an intuitive identification of mental process categories with quaternionic vectors. It is not difficult to make this identification rigorous, by introducing a magician system set algebra based on the magician system algebra given above.

To introduce this new kind of algebra, let us stick with the current example. Suppose one has a set of magicians corresponding to perceptual processes, a set corresponding to thought processes, and a set corresponding to action processes. These sets are to be called PERCEPTION, THOUGHT and ACTION. The schematic equations given above are then to be interpreted as set equations. For instance, the equation

PERCEPTION * THOUGHT = ACTION

means that:

1) for any magicians P and T in the sets PERCEPTION and THOUGHT respectively, the product P*T will be in the set ACTION

2) for any magician A in the set ACTION, there are magicians P and T in the sets PERCEPTION and THOUGHT respectively so that P*T=A

"Anti-sets" such as -PERCEPTION are defined in the obvious way: e.g. -PERCEPTION is the class of all elements R so that P = -R for some element P in PERCEPTION.

In general, suppose one has a collection S = (S1,...,Sk) of subsets of a magician system M. This collection may or may not naturally define a set algebra. In general, the products of elements in Si and Sj will fall into a number of different classes Sm, or perhaps not into any of these classes at all. One may always define a probabilistically weighted set algebra, however, in which different equations hold with different weights. One way to do this is to say that the tagged equation

Si * Sj = Sk     [pijk]

holds, with

pijk = qijk^a * rijk^(2-a)

where qijk is the probability that, if one chooses a random element from Si and combines it with a random element from Sj, one will obtain an element from Sk; and rijk is the probability that a randomly chosen element from Sk can be produced by combining some element of Si with some element of Sj.
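As a sketch of how these weights might be computed for a small, deterministic magician system (here the quaternion units, partitioned into three illustrative sets; the exponent a is left as a free parameter of the weighting scheme, following the formula above):

from itertools import product

def quat_mul(x, y):
    # Hamilton product of quaternions represented as 4-tuples
    a, b, c, d = x; e, f, g, h = y
    return (a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)

i, j, k = (0,1,0,0), (0,0,1,0), (0,0,0,1)
neg = lambda q: tuple(-t for t in q)

PERCEPTION = {i, neg(i)}
THOUGHT    = {k, neg(k)}
ACTION     = {j, neg(j)}

def weight(Si, Sj, Sk, a=1.0):
    prods = [quat_mul(x, y) for x, y in product(Si, Sj)]
    q = sum(1 for p in prods if p in Sk) / len(prods)   # chance a random product lands in Sk
    r = sum(1 for z in Sk if z in prods) / len(Sk)      # fraction of Sk producible from Si and Sj
    return q**a * r**(2 - a)                            # pijk

print(weight(PERCEPTION, THOUGHT, ACTION))    # 1.0: PERCEPTION * THOUGHT = ACTION holds exactly
print(weight(PERCEPTION, THOUGHT, THOUGHT))   # 0.0: these products never land in THOUGHT

In a stochastic or biologically realistic system the same counts would simply be replaced by empirical frequencies.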

It is easier to deal with straightforward set algebras than their probabilistic counterparts. In real psychological systems, however, it is unlikely that an equation such as

PERCEPTION * THOUGHT = ACTION

could hold strictly. Rather, it might be expected to hold probabilistically with a high probability (making allowances for stray neural connections, etc.).

Finally, suppose one has a collection of subsets S and a corresponding set algebra. One may then define the relative unity of this set algebra as the set 1S of all elements U with the property that U * Si is contained in Si for all i. The relative unity may have an anti-set, which will be denoted -1S. These definitions provide a rigorous formulation of the correspondence between thoughts, perceptions and actions and quaternionic vectors, as proposed in the previous section.

Note that the set algebra formalism applies, without modification, to stochastic magician systems, i.e. to the case where the same magician product A*B may lead to a number of different possible outcomes on different trials.

Octonions and Second-Order Learning

Quaternions correspond to adaptive learning; to learning triads and backtracking. The octonionic algebra represents a step beyond adaptive learning, to what might be called "second-order learning," or learning about learning. The new element E, as it turns out, is most easily interpreted as a kind of second-order monitoring process, or "inner eye." Thus, in the linear combinations q+Er, the elements q are elementary mental processes, and the elements Er are mental processes which result from inner observation of other mental processes.

The three quaternionic elements i, j and k are mutually interchangeable. The additional octonionic element E, however, has a distinguished role in the canonical octonionic multiplication table. It leads to three further elements, I=iE, J=jE and K=kE, which are themselves mutually interchangeable. The special role of E means that, in terms of learning triads, there is really only one natural interpretation of the role of E, which is given by the following:

PERCEPTION       ACTION       THOUGHT

i                j            k

INNER EYE       PERCEPTION'       ACTION'       THOUGHT'

E               I                 J             K

The meaning of this correspondence is revealed, first of all, by the observation that the systems of elements

(1,i,E,I),  (1,j,E,J),  (1,k,E,K)

are canonical bases of the subalgebras which they generate, each isomorphic to the quaternions. These quaternionic subalgebras correspond to learning triads of the form

	             INNER EYE 

/ \

PERCEPTION -------- PERCEPTION'

INNER EYE

/ \

THOUGHT --------- THOUGHT'

INNER EYE

/ \

ACTION ----------- ACTION'

The element E, which I have called INNER EYE, can thus act as a kind of second-order thought. It is thought which treats all the elements of the first-order learning triad as percepts: thought which perceives first-order perception, thought and action, and produces modified processes based on these perceptions. These actions that second-order thought produces may then enter into the pool of consciously active processes and interact freely. In particular, the new perception, thought and action processes created by the inner eye may enter into the following learning triads:

		 THOUGHT'

/ \

PERCEPTION' -- ACTION

THOUGHT

/ \

PERCEPTION' -- ACTION'

THOUGHT'

/ \

PERCEPTION -- ACTION'

These are genuine, reversible learning triads, because the sets (1,i,K,J), (1,I,j,K), and (1,I,J,k) are canonical bases for the subalgebras isomorphic to the quaternions which they generate. These, together with the basic learning triad and the three triads involving the INNER EYE, given above, represent the only seven quaternionic subalgebras contained within the octonions.
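This count can be confirmed by brute force. The following Python sketch (again using the Cayley-Dickson construction as one concrete realization of the multiplication table) examines every triple of distinct imaginary units and keeps those that are closed under multiplication up to sign; exactly seven triples survive, and they are the seven triads discussed above.

from itertools import combinations

def quat_mul(x, y):
    a, b, c, d = x; e, f, g, h = y
    return (a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)

def quat_conj(x):
    return (x[0], -x[1], -x[2], -x[3])

def oct_mul(x, y):
    # Cayley-Dickson doubling: an octonion is a pair (p, q) of quaternions
    p, q = x; r, s = y
    return (tuple(u - v for u, v in zip(quat_mul(p, r), quat_mul(quat_conj(s), q))),
            tuple(u + v for u, v in zip(quat_mul(s, p), quat_mul(q, quat_conj(r)))))

Q1, Qi, Qj, Qk, Q0 = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (0,0,0,0)
units = {"i": (Qi, Q0), "j": (Qj, Q0), "k": (Qk, Q0), "E": (Q0, Q1),
         "I": (Q0, Qi), "J": (Q0, Qj), "K": (Q0, Qk)}
neg = lambda x: (tuple(-t for t in x[0]), tuple(-t for t in x[1]))

def closed(names):
    # a triple generates a quaternionic subalgebra iff it is closed up to sign
    vals = [units[n] for n in names]
    allowed = set(vals) | {neg(v) for v in vals}
    return all(oct_mul(x, y) in allowed for x in vals for y in vals if x != y)

triads = [t for t in combinations(units, 3) if closed(t)]
print(len(triads))   # 7
print(triads)        # the seven quaternionic triads of the octonion basis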

It is well worth noting that there is no loop of the form

		 THOUGHT'

/ \

PERCEPTION' -- ACTION'

within the octonion algebra. That is, complete substitution of the results of the inner eye's modifications for the elements of the original learning triad is not a reversible operation. One can substitute any two of the modified versions at a time, retaining one of the old versions, and still retain reversibility. Thus a complete round of second-order learning is not quite possible within the octonion algebra. However, one can attain a good approximation.

And, as an aside, the actual behavior of this "complete substitution" loop {I,J,K} is worth reflecting on for a moment. Note that IJ = -k, JK = -i, KI = -j. Traversing the complete substitution loop forward, one produces the anti-elements needed for backtracking in the original learning triad. Thus the INNER EYE has the potential to lead to backtracking, while at the same time leading to new, improved learning triads. There is a great deal of subtlety going on here, which will only be uncovered by deep reflection and extensive experimentation.

In order to completely incorporate the results of the inner eye's observations, one needs to transcend the boundaries of reversible processing, and put the new results (I,J,K) in the place of the old elementary learning triad (i,j,k). Having done this, and placed (I,J,K) in the role of the perceptual-cognitive-active loop, the octonionic process can begin again, and construct a new collection of modified processes.

What is permitted by the three triads generated by (1,i,K,J), (1,I,j,K), and (1,I,J,k) is a kind of "uncommitted" second-order learning. One is incorporating the observations of the inner eye, but without giving up the old way of doing things. The results of these new, uncommitted triads cannot be externally observed until a commitment has been made; but the new triads can be used and progressively replaced, while the observing-eye process goes on.

8.10 MIND AS PREGEOMETRY

In The Structure of Intelligence I discussed the quantum theory of consciousness -- meaning the theory that the act of consciousness is synonymous with the collapse of the wave packet in a quantum system. I viewed this as a viable alternative to the theory that consciousness is entirely deterministic and mechanistic. However, I realized and stated the limitations of the "quantum theory of consciousness" approach. Merely to link awareness and physics together on the level of nomenclature is not good enough. There has to be a more fundamental connection, a connection that makes a difference for both physics and psychology.

Toward the end of Chaotic Logic I tried to push the same idea in a different direction. I discussed Feynman path integrals, the formalism according to which, in quantum theory, the amplitude (the square root of the probability) of a particle going from point A to point B is given by the sum of the amplitudes of all possible paths from A to B. Certain paths, I proposed, were algorithmically simpler than others -- and hence psychologically more likely. Perhaps the sum over all paths should be weighted by algorithmic simplicity, with respect to a given mind. Perhaps this would solve the renormalization problem, the famous "infinities" plaguing particle physics. I tried to work out the mathematics associated with this hypothesis, but in vain -- it remained an abstract speculation.

That particular section of Chaotic Logic attracted no attention whatsoever. However, a number of readers of the book did remark to me that the magician system model reminded them of particle physics. This parallel did not occur to me at the time I conceived the magician system model -- perhaps because at the time I was reading a chemist (George Kampis) and collaborating with an algebraist (Harold Bowman) -- but in fact it is rather obvious. The combination of two magicians to form a third is similar to what happens when two particles collide and produce another particle. Antimagicians are like antiparticles. In addition to the common tool of abstract algebra, there is a strong conceptual similarity.

This idea remained vague for a while but finally began to come together when I began communicating with F. Tony Smith, a physicist at Georgia Tech. Tony has developed an original, elegant and intriguing theory of fundamental particle physics, which is based entirely on finite algebras acting on discrete spaces (Smith, 1996). The basis of all his algebras is the octonionic division algebra -- the octonions are "unfolded" to yield Clifford algebras and Lie algebras, which give the fundamental symmetry groups of all the different kinds of particles. Tony's theory was based on discrete algebraic entities living on the nodes of a graph (an eight-dimensional lattice)-- it was, in fact, a graphical magician system!

The correctness or incorrectness of this particular physics theory is not the point -- the point is the placing of physical and psychological models on a common mathematical ground. If both can reasonably be viewed in terms of discrete algebras living on graphs, then how difficult can it be to understand the relation between the two? This is a subject of intense current interest to me; the present section merely describes a few thoughts in this direction. While these ideas are still somewhat rough, I feel that they indicate the kind of direction that fundamental physics must take, if it is to go beyond the mathematical confusions in which it is currently mired. The connection between the conscious mind and the physical world is there, it is active, and it cannot be denied forever.

Approaches to Quantum Measurement

I will begin by reviewing the puzzle of quantum measurement -- a puzzle that has drawn the attention of a great number of physicists, beginning with the foundation of quantum physics and continuing to the present day. I will not attempt a systematic review, but will only mention a few of the best-known theories, and then turn to the more radical thought of John Wheeler, which will serve as a springboard for the main ideas of the section.

The key fact here is that a quantum system does not have a definite state: it lives in a kind of probabilistic superposition of different states. Yet somehow, when we measure it, it assumes a definite state.

The standard "Copenhagen interpretation" of quantum measurement states that, when a measurement is made, a quantum system suddenly collapses from a probabilistic superposition of states into a definite state. This nonlinear "collapse of the wave function" is an additional assumption, which sits rather oddly with the linear evolution equations, but poses no fundamental inconsistency. The act of measurement is defined somewhat vaguely, as "registration on some macroscopic measuring device." The fact that macroscopic measuring devices are themselves quantum systems is sidestepped.

London and Bauer (XX), Wigner (XX), Goswami (XX), Goertzel (XX) and others have altered the Copenhagen interpretation by replacing the macroscopic measuring instrument with consciousness itself. According to this view, it doesn't matter whether the photoelectric emulsion blackens, it matters whether the blackening of the emulsion enters someone's consciousness. The weak point here, of course, is that consciousness itself is not a well-defined entity. One is replacing one ill-defined entity, measurement, with another.

The interpretation of the measurement process that comes closest to a direct translation of the mathematical formalism of quantum theory is the many-universes theory, which states that, every time a measurement is made, universes in which the measurement came out one way are differentiated from universes in which it came out another way. This interpretation does not add an extra step to quantum dynamics, nor does it appeal to extra-physical entities. It is, in David Deutsch's phrase, short on assumptions but long on universes. The ontological status of the alternate universes is not quite clear; nor is it clear when a measurement, which splits off universes from each other, should be judged to have occurred.

John Wheeler has taken a somewhat different view of the problems of quantum physics, one which is extremely relevant to the ideas of the present paper. While acknowledging the elegance and appeal of the many-universes theory, he rejects it because

[T]he Everett interpretation takes quantum theory in its present form as the currency, in terms of which everything has to be explained or understood, leaving the act of observation as a mere secondary phenomenon. In my view we need to find a different outlook in which the primary concept is to make meaning out of observation and, from that derive the formalism of quantum theory.

Quantum physics, in Wheeler's view, has uncovered the fundamental role of the observer in the physical world, but has not done it justice. The next physics breakthrough will go one step further, and place the observer at the center.

Wheeler also believes that the equations of quantum theory will ultimately be seen to be statistical in nature, similar to the equations of thermodynamics:

I believe that [physical] events go together in a higgledy-piggledy fashion and that what seem to be precise equations emerge in every case in a statistical way from the physics of large numbers; quantum physics in particular seems to work like that.

The nature of this statistical emergence is not specified; except that, preferably, one would want to see physical concepts arise out of some kind of non-physical substrate:

If we're ever going to find an element of nature that explains space and time, we surely have to find something that is deeper than space and time -- something that itself has no localization in space and time. The ... elementary quantum phenomenon ... is indeed something of a pure knowledge-theoretical character, an atom of information which has no localization in between the point of entry and the point of registration.

This hypothetical non-physical substrate, Wheeler has called "pregeometry." At one point, he explored the idea of using propositional logic as pregeometry, of somehow getting space and time to emerge from the statistics of large numbers of complex logical propositions (Wheeler, 19XX). However, this idea did not bear fruit.

Wheeler also suspects the quantum phenomenon to have something to do with the social construction of meaning:

I try to put [Bohr's] point of view in this statement: 'No elementary quantum phenomenon is a phenomenon until it's brought to a close by an irreversible act of amplification by a detection such as the click of a geiger counter or the blackening of a grain of photographic emulsion.' This, as Bohr puts it, amounts to something that one person can speak about to another in plain language....

Wheeler divides the act of observation into two phases: first, the bringing to a close of the elementary quantum phenomenon; and then the construction of meaning based on this phenomenon. The accomplishment of the first phase, he suggests, depends on the possibility of the second, but not on the second's actual accomplishment. A quantum phenomenon is not a phenomenon until it is potentially meaningful; but even if, say, the photographic emulsion is destroyed by a fire before anyone makes use of it, the elementary quantum phenomenon was still registered. This is different from the view that the quantum phenomenon must actually enter some entity's consciousness in order to become a phenomenon.

Regarding the second phase of observation, the construction of meaning, Wheeler cites the Norwegian philosopher Follesdal, to the effect that "meaning is 'the joint product of all the evidence that is available to those who communicate.'" Somehow, a quantum phenomenon becomes a phenomenon when it becomes evidence that is in principle available for communication.

In the end, Wheeler does not provide an alternative to the standard quantum theories of measurement. Instead, he provides a collection of deeply-conceived and radical ideas, which fit together as elegantly as those of any professional philosopher, and which should be profoundly thought-provoking to anyone concerned with the future of physical theory.

The Relativity of Reality

A particularly subtle twist to the puzzle of quantum measurement has been given by Rossler (19XX). Rossler describes the following experiment. Take two particles -- say, photons. As in the Aspect experiment (19XX), shoot them away from each other, in different directions. Then measure the photons with two different measuring devices, in two different places. According to special relativity, simultaneity is not objective. So suppose device 1 is moving with respect to device 2, and event A appears to device 1 to occur before event B. It is nonetheless possible that, to device 2, event B appears to occur before event A.

Then, in the Aspect experiment, suppose that device 1 measures photon A, and device 2 measures photon B. One may set the devices up so that, in the reference frame of device 1, photon A is measured first, but in the reference frame of device 2, photon B is measured first. In the reference frame of each of the measuring devices, one has precisely the Aspect experiment. But one also has a problem. The notion of a measured state must be redefined so that it becomes reference-frame-dependent. One cannot say in any "objective" way whether a given probabilistic superposition of states has collapsed or not; one can only say whether or not it has collapsed within some particular reference frame.
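As a concrete check of the frame-dependence being invoked here -- a standard special-relativistic calculation, with numbers chosen purely for illustration -- the following Python fragment Lorentz-transforms two spacelike-separated measurement events and shows their temporal order reversing with the observer's velocity (units with c = 1):

import numpy as np

def boosted_time(t, x, v):
    """Time coordinate of the event (t, x) in a frame moving at velocity v (c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return gamma * (t - v * x)

# Event A: photon A is measured at x = -1 at t = 0.0
# Event B: photon B is measured at x = +1 at t = 0.1 (spacelike separation)
event_A = (0.0, -1.0)
event_B = (0.1, +1.0)

for v in (0.0, +0.3, -0.3):
    tA = boosted_time(*event_A, v)
    tB = boosted_time(*event_B, v)
    first = "A" if tA < tB else "B"
    print(f"frame velocity {v:+.1f}:  t'_A = {tA:+.3f}, t'_B = {tB:+.3f}  ->  {first} measured first")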

Aharonov and Albert (19XX), discussing a similar experiment, conclude that the notion of a single time axis is inadequate for describing physical reality. Each observer, they argue, must be considered to have its own "personal" time axis, along which probabilistic superpositions gradually collapse into greater definiteness.

And if time is multiple, what about space? The verdict here is not quite so clear, but on balance it seems probable that we will ultimately need subjective spaces to go along with subjective time axes. This idea comes from the study of quantum theory in curved spacetime (19XX). In this theory one arrives at the remarkable conclusion that, if two observers are moving relative to one another, and both look at the same exact spot, one observer may see a particle there while the other does not. This means that, in curved spacetime, particles have no observer-independent existence. Of course, it is difficult to interpret this result, since quantum physics in curved spacetime is not a genuine physical theory, just a sophisticated mathematical cut-and-paste job. But nonetheless, the result is highly suggestive.

These results suggest new troubles for the quantum theory of measurement -- troubles beyond the mere collapse of the wave packet, splitting of universes, and so forth. In the many-universes picture, instead of the whole human race living in a universe which periodically splits into many, one must think of each observer as living in his own individual universe, which splits up periodically. These ideas sit rather oddly with Wheeler's concept of observation as having to do with communication and the collective construction of meaning. Suppose something is only measurable when it can potentially be communicated throughout a community of minds. One must now ask: in whose universe does this community of minds actually exist?

Minds and Physical Systems

The next step is to tie these physical ideas in with the psynet model. First, however, we must remind ourselves how the abstract process systems posited in the psynet model relate to physical systems.

This question might seem to be a restatement of the basic puzzle of the relation between mind and reality. For the moment, however, we are dealing with something simpler: the question of how one type of dynamical system (magician systems) relates to another type of dynamical system (physical systems). Or, to phrase it more accurately, how one level of dynamical system relates to another level of dynamical system. For, after all, it is quite possible that physical systems themselves are magician systems -- as we shall see in the following section, there are models of particle physics which say just this.

Following previous publications, I will take a pragmatic approach to the relation between minds and brains. Invoking the theory of algorithmic information, I will assert that minds are minimal algorithmic patterns in brains. A magician system is a minimal pattern in a physical system P if:

1) the magician system is more "simply given" than the system P

2) the structure of the magician system, at a given time and in evolution over time, is similar to the structure of the system P

3) no part of the magician system can be removed without making the magician system less structurally similar to the system P

These three criteria may be quantified in obvious ways; a toy quantification is sketched below. A mind associated with a physical system P is a magician system which is a pattern in the system P, in this sense.
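As a toy illustration of such a quantification -- my own crude sketch, using compressed length as a stand-in for algorithmic information, and with an arbitrary similarity threshold -- one might write something like the following Python fragment:

import zlib

def complexity(description: bytes) -> int:
    """Crude proxy for "simply given": length of the compressed description."""
    return len(zlib.compress(description))

def similarity(a: bytes, b: bytes) -> float:
    """Crude structural similarity via a normalized compression distance."""
    c_ab = complexity(a + b)
    c_a, c_b = complexity(a), complexity(b)
    return 1.0 - (c_ab - min(c_a, c_b)) / max(c_a, c_b)

def is_minimal_pattern(magician: bytes, parts: list, system_p: bytes) -> bool:
    """Check criteria 1-3 for a candidate magician system against a physical system P."""
    simpler = complexity(magician) < complexity(system_p)            # criterion 1
    base = similarity(magician, system_p)                            # criterion 2
    nothing_removable = all(                                         # criterion 3
        similarity(magician.replace(part, b""), system_p) < base
        for part in parts)
    return simpler and base > 0.5 and nothing_removable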

This approach intentionally sidesteps the question of the basic nature of awareness. In this respect, it is perhaps wise to reiterate the distinction between raw awareness and consciousness. Awareness, considered as the basic sense of existence or presence in the world, is not analyzed here, and is perhaps not analyzable. Consciousness, however, is considered as awareness mediated by mental structures. Different states of consciousness involve different mental structures. Thus, mind affects consciousness, but does not necessarily affect awareness.

In terms of quantum measurement, we may say that an elementary quantum phenomenon becomes a phenomenon, from the point of view of a given mind, when it enters that mind. Thus, an elementary quantum phenomenon becomes a phenomenon to a given mind when it becomes a part of a minimal algorithmic pattern in the system supporting that mind. Taking an animist view, one may say that, when a quantum phenomenon registers on a photographic emulsion, it becomes a phenomenon from the point of view of that emulsion. It becomes a part of the "mind" of that emulsion. On the other hand, when a person views the emulsion, the quantum phenomenon becomes a phenomenon from the point of view of that person.

This subjectivistic view may seem problematic to some. Given the thought-experiments described above, however, it seems the only possible course. Psychologists have long understood that each mind has its own subjective reality; quantum physics merely extends this subjectivity in a different way. It shows that physical phenomena, considered in a purely physical sense, do not have meaning except in the context of someone's, or something's, subjective reality.

The Mental-Physical Optimality Principle

Now let us get down to business. Recall Wheeler's assertion that quantum mechanics should arise from the statistics of some underlying, pregeometric domain. He selected propositional logic as a candidate for such a "pregeometry." From a psychological point of view, however, propositional logic is simply a crude, severely incomplete model of the process of intelligent thought (see SI and CL). It represents an isolation of one mental faculty, deductive logic, from its natural mental environment. Instead of propositional logic, I propose, the correct pregeometry is mind itself. Physics, I propose, results from the statistics of a large number of minds.

Suppose that, as the psynet model suggests, minds can be expressed as hypercomplex algebras, and that mental processes can be understood as nonlinear iterations on hypercomplex algebras. In this context, we can raise the question "How might a collection of minds give rise to a physical reality?" in a newly rigorous way. The question becomes: how might a collection of hypercomplex algebras, and the attractors and meta-attractors within these algebras under an appropriate nonlinear dynamic, give rise to a physical reality?
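To make the phrase "nonlinear iteration on a hypercomplex algebra" concrete, here is a tiny illustrative Python fragment -- my own example, not a dynamic prescribed by the psynet model -- which iterates the quadratic map q -> q*q + c in the quaternion algebra and reports whether the orbit settles into a bounded attractor or escapes:

import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions a = (w, x, y, z) and b."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

# two illustrative parameter values: one inside, one outside the bounded region
for c in (np.array([-0.1, 0.2, 0.1, 0.1]), np.array([0.4, 0.4, 0.4, 0.4])):
    q = np.zeros(4)
    for _ in range(200):
        q = qmul(q, q) + c          # the nonlinear iteration on the algebra
        if np.linalg.norm(q) > 1e6:
            break
    verdict = "escapes" if np.linalg.norm(q) > 1e6 else "settles into a bounded attractor"
    print(f"c = {c}:  orbit {verdict}")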

Given that minds themselves are understood as patterns in physical systems, what is being proposed here must be seen as a kind of circular causation. Physical reality gives rise to physical systems, which give rise to minds, which in turn combine to produce physical reality. We know how physical reality gives rise to minds -- at least, to structured systems, which interact with awareness to form states of consciousness. We do not know how minds give rise to physical reality.

My answer to this question is a simple one: Minds sum up to form physical reality. This sum takes place in an abstract space which may be conceptualized as a space of abstract algebras.

The nature of this summation may be understood by analogy to Feynman integrals in quantum physics. In the simplest example, a Feynman integral is a sum over all possible paths from one point to another. One assigns each path a two-dimensional vector, the angular coordinate of which is given by the action of the path divided by the reduced Planck's constant. Then one adds up these vectors, and divides the sum by an overall normalization factor. The sum is the amplitude (the square root of the probability) of the particle in question going from the first point to the second. The key is that nearly all of the paths cancel each other out. The vectors point in a huge number of different directions, so that ultimately the only vectors that make a significant contribution are the ones that are bunched together with a lot of other vectors, i.e., the ones corresponding to paths near local extrema of the action. The amplitude of the transition is thus approximately determined by the paths near the local extrema of the action. As Planck's constant tends to zero, the cancellation of alternative paths becomes complete, and the "optimal" path is entirely dominant. As Planck's constant is increased, the cancellation of the alternative paths increasingly fails, and there is more of a "random" flavor to the assignment of amplitudes.
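The cancellation mechanism can be illustrated numerically. The following Python sketch -- a deliberately crude discretization, with all parameter values chosen only for illustration -- assigns each random path between fixed endpoints the phase vector exp(iS/hbar) and shows that the average phase vector stays coherent only for paths close to the classical, action-extremizing path:

import numpy as np

rng = np.random.default_rng(0)

N = 20                      # number of time slices
DT = 1.0 / N                # total time = 1, in illustrative units
HBAR = 1.0                  # stands in for the reduced Planck's constant
N_PATHS = 5000              # paths sampled per deviation scale

t = np.linspace(0.0, 1.0, N + 1)
classical = t.copy()        # straight line from x=0 to x=1: the action extremum

def action(path, mass=1.0):
    """Discretized free-particle action  S = sum (m/2) (dx/dt)^2 dt."""
    v = np.diff(path) / DT
    return np.sum(0.5 * mass * v**2 * DT)

def mean_phase(spread):
    """Magnitude of the average phase vector exp(iS/hbar) over random paths."""
    total = 0.0 + 0.0j
    for _ in range(N_PATHS):
        wiggle = np.zeros(N + 1)
        wiggle[1:-1] = rng.normal(scale=spread, size=N - 1)   # endpoints stay fixed
        total += np.exp(1j * action(classical + wiggle) / HBAR)
    return abs(total) / N_PATHS

for spread in (0.0, 0.02, 0.05, 0.1, 0.2):
    print(f"typical deviation from classical path {spread:.2f}: "
          f"|mean phase vector| = {mean_phase(spread):.3f}")

As the deviation scale grows, the reported magnitude falls from one toward zero, mirroring the cancellation of non-extremal paths.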

When adding together minds, we are not working in the two-dimensional space of complex numbers, but rather in an n-dimensional space of abstract algebras. Here the dimension n is large, and will vary with the number and size of the minds involved -- one could speak of an infinite-dimensional Hilbert space, with the understanding that only a finite number of dimensions will be involved in any particular calculation. Despite the increase in dimensionality, however, the principle is the same as with Feynman integrals. We are adding together a huge variety of n-vectors, which spread out in all different directions. Mainly, these vectors will cancel each other out. But certain substructures common to a great number of the algebras will remain. These substructures will be ones that are "optimally efficient," in the sense that a significant percentage of the minds in the universe have evolved so as to possess them. My claim is that these optimal substructures are precisely physical reality. This is what I call the Mental-Physical Optimality Principle.
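The following toy calculation -- my own analogy, not a formalization of the psynet summation, with dimensions and noise levels chosen arbitrarily -- shows the statistical point: when many high-dimensional vectors sharing a small common substructure are averaged, the idiosyncratic components shrink like 1/sqrt(M) while the shared substructure survives at full strength:

import numpy as np

rng = np.random.default_rng(0)

n = 256          # dimension of the abstract "space of algebras"
M = 10_000       # number of minds being summed

shared = np.zeros(n)
shared[:8] = 1.0                       # the common, "optimally efficient" substructure

# each mind = shared substructure + a much larger idiosyncratic component
minds = shared + rng.normal(scale=5.0, size=(M, n))
average = minds.mean(axis=0)

print("shared coordinates, mean value :", round(float(average[:8].mean()), 3))
print("other coordinates,  rms value  :", round(float(np.sqrt((average[8:]**2).mean())), 3))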

This, finally, is the basic concept of my reduction of physics to psychology: physical structures are precisely the most efficient, and hence most common, structures of mind, so that when one sums together a vast number of minds, it is precisely the physical structures which are not cancelled out.

Note that, in this theory, the process of summation does not eliminate the summands. Individual minds maintain their individual existence; but at the same time, they sum together to produce the collective physical reality, from which they all emerge. Mind and physical reality create each other, on a recurrent and ceaseless basis.

One might wonder whether, once mind has gone ahead and created a new physical reality, the old one entirely disappears. But such a wholesale disappearance would be a philosophical extravagance. One might as well suppose that physical entities have a certain lasting existence, a certain tendency to persist, and that they fade from existence only if their component parts are not re-created after a certain amount of time has elapsed. In this way one retains the fundamental continuity and consistency of the universe, and also the idea that mind and reality create each other. One has a kind of "illusory continuity" emerging from an underlying discrete dynamic of mind-reality intercreation. The kind of "tendency to persist" being invoked here has a venerable history in philosophy, for instance in the work of Charles S. Peirce (19XX).

The Efficiency of Finite Division Algebras

If one accepts a discrete theory of physics such as that of Smith (1996), then physical systems and psychological systems live in the same space, so that minds can indeed be summed together to yield physical realities. In order to make this abstract idea convincing, however, we must give some concrete reason to believe that the specific algebraic structures involved in thought have something to do with the specific algebraic structures involved in the physical structure of the universe. This might be done in any number of different ways: the general mental-physical optimality principle, and the concept of mind as pregeometry, do not rely on any particular algebraic structure. At present, however, only one particular psychological-physical correspondence has presented itself to us, and this is the octonionic algebra. The octonions appear in the magician system model of consciousness, and also in the discrete theory of physics. In both cases they represent the same intuitive phenomenon: the structure of the present moment.

One might well wonder why such a correspondence should exist. Why should physically important structures also be psychologically important structures? The answer to this question lies, we believe, in mathematics: Certain mathematical structures possess an intrinsic efficiency, which causes them to arise time and time again in different circumstances.

In this case, the efficient algebraic structure in question is the octonionic algebra. The reason for the importance of the octonions is very simple: there is a theorem which states that the only finite-dimensional real division algebras are one-, two-, four- and eight-dimensional. Among these, the only algebras with reasonable algebraic properties are the real numbers, the complex numbers, the quaternions and the octonions. The octonions are thus the largest reasonably structured algebra with the property of unique division. And the property of unique division is, one suspects, crucial for efficient functioning, both in physics and in psychology. The sketch below illustrates numerically how the division property holds up to dimension eight and then breaks down.
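The following Python sketch -- my own numerical illustration -- builds the reals, complex numbers, quaternions, octonions and sedenions by the Cayley-Dickson doubling construction and probes the division property by testing whether |ab| = |a||b| for random elements:

import numpy as np

def conj(a):
    """Cayley-Dickson conjugate of a coefficient vector."""
    if len(a) == 1:
        return a
    h = len(a) // 2
    return np.concatenate([conj(a[:h]), -a[h:]])

def cd_mul(a, b):
    """Cayley-Dickson product: (p, q)(r, s) = (p r - s* q,  s p + q r*)."""
    if len(a) == 1:
        return a * b
    h = len(a) // 2
    p, q = a[:h], a[h:]
    r, s = b[:h], b[h:]
    return np.concatenate([cd_mul(p, r) - cd_mul(conj(s), q),
                           cd_mul(s, p) + cd_mul(q, conj(r))])

names = {1: "reals", 2: "complex", 4: "quaternions", 8: "octonions", 16: "sedenions"}
rng = np.random.default_rng(0)
norm = np.linalg.norm

for dim, name in names.items():
    worst = 0.0
    for _ in range(200):
        a = rng.normal(size=dim)
        b = rng.normal(size=dim)
        worst = max(worst, abs(norm(cd_mul(a, b)) - norm(a) * norm(b)))
    print(f"dim {dim:2d} ({name:11s}): worst | |ab| - |a||b| | over 200 trials = {worst:.1e}")

Up to numerical round-off, the deviation is zero for dimensions one through eight and of order one for the sixteen-dimensional sedenions, which have zero divisors and hence no unique division.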

This is, as yet, not a scientific idea. It is a philosophical idea which takes mathematical form. I have not found any way to put the idea to empirical test. It should be remembered, however, that many widely accepted notions in particle physics are equally distant from the realm of empirical test. String theory is the best known example. In fundamental physics, as in theoretical cognitive science, experimentation is difficult, so that conceptual coherence and fecundity emerge as crucial criteria for the assessment of theories. I believe that, judged by these standards, the "mind as pregeometry" idea excels.

8.11 CONCLUSION

Richard Feynman said, "Whoever tries to understand quantum theory, vanishes into a black hole and will never be heard from again." The same might be said about consciousness -- and doubly so for those who try to understand the relation between quantum theory and consciousness! The phenomenon of consciousness, like the elementary quantum phenomenon, displays layer after layer of complexity: one can keep unfolding it forever, always coming to new surprises, and always dancing around the raw, ineffable core.

The treatment of consciousness given here surpasses the one I gave in Chaotic Logic, and my own research on consciousness has since extended significantly beyond the ideas of this chapter. Yet even this more extensive research is still in many ways incomplete. What I hope to have accomplished here, if nothing else, is to have illustrated some of the ways in which consciousness can be investigated in the context of the psynet model.

The concept of the dual network leads to the perceptual-cognitive loop, which is a useful abstraction of diverse neurobiological data. The concept of mental systems as magician systems leads to the division algebra theory of consciousness, which again provides a useful way of synthesizing different psychological observations. Division algebras connect many seemingly disparate things: Bohm's proprioception of thought, the magic number 7 +/- 2, the relation between mind and quantum reality,.... The psynet model does not, in itself, solve any of the puzzles of consciousness. But it does provide a useful and productive framework for thinking about the various issues involved.