
Evolutionary Quantum Computation:


Its Role in the Brain,
Its Realization in Electronic Hardware,
and Its Implications for the Panpsychic
Theory of Consciousness



Ben Goertzel
Computer Science Department
College of Staten Island
and
IntelliGenesis Corp.


Copyright Ben Goertzel 1997
Abstract: An evolutionary quantum computer (EQC) is a physical system that maintains an internal ensemble of macroscopic "quantum subsystems" manifesting significant quantum indeterminacy, with the property that the ensemble of quantum subsystems is continually changing in such a way as to optimize some measure of the emergent patterns between the system and its environment. It seems probable that the brain is an EQC, and that electronic EQCs dissimilar to the brain can also be constructed; a speculative design in this regard, called QELA (Quantum Evolvable Logic Array), is described, involving Superconducting Quantum Interference Devices interfacing with re-configurable Field Programmable Gate Arrays. EQC has interesting implications for a quantum panpsychic view of consciousness: it provides an explanation of why, if everything is conscious to some extent, the human brain is so much more conscious than most other systems. The explanation is that, via EQC, the brain is able to maintain significant quantum randomness ("raw awareness") in a way that is correlated with its structure and behavior. Only EQC provides this kind of correlation, because only EQC allows uncollapsed quantum systems to interact significantly with the wave-function-collapsed, classical everyday world. In many-worlds terms, EQC allows systems with a broad span over the range of possible universes to interact significantly with systems existing in narrow regions of universe-space.
Introduction

The concept of quantum computing (QC) is subtle and intriguing. By using the special nonlocal properties of quantum phenomena, it seems, one can potentially compute things more efficiently than is possible on ordinary digital computers (Deutsch, 1985). There is mounting evidence that the brain itself shows significant quantum effects, and should be modeled as a QC rather than as a standard Turing machine. Since many thinkers have equated consciousness with the quantum-theoretic "collapse of the wave function," the notion of the brain as a QC has obvious implications for consciousness.

Here I will approach these issues from an original angle, introducing complex systems ideas into the "quantum brain" discussion via the notion of Evolutionary Quantum Computing (EQC): quantum computing which proceeds in the manner of genetic algorithms, permitting the "breeding" of quantum computational systems that have significant impact on the classical physical world while remaining macroscopic quantum systems in uncollapsed form. EQC, I will argue, is the correct model of quantum computing in the brain, and is a viable design strategy for building electronic quantum computers today. Furthermore, it has radical implications for the theory of consciousness. It provides, for the first time, an explanation of why, if every entity in the universe is conscious as "quantum panpsychism" suggests, it should be the case that some entities (such as brains) are so much more intensely conscious than others (such as rocks).

Appreciation of quantum computing requires some understanding of the peculiar nature of quantum measurement (Wheeler and Zurek, 1978). A quantum system exists in a probabilistic superposition of states rather than a single definite state; in the many-universes interpretation, a quantum system is thought of as existing in a number of parallel universes, one for each possible state. Making an observation of the system "collapses" the system to one possible state, or universe. The promise of quantum computing is that, while a system is uncollapsed, it can carry out more computing than a collapsed system, because, in a sense, it is computing in an infinite number of universes at once.

One can prove that the "worst-case" behavior of a quantum computer program cannot exceed that of an ordinary computer program. However, the "average-case" behavior is a different story: on average, quantum computers are in principle capable of vastly outperforming ordinary digital computers on a great number of important problem-solving tasks, including pattern-recognition and code-cracking (Deutsch, 1985).

This is the theory. The practice, as yet, is essentially nonexistent, and is likely to remain so for quite some time. The trouble is that QC as it is currently being conceived is an extremely farfetched and difficult proposition. The present consensus QC paradigm is based on coaxing systems of particles to behave like digital computers, which is an onerous task. Lee Kent Hempfling's (1997) "neutronics technology" is the only existing alternative, but while Hempfling's approach is interesting, it is highly questionable on a number of grounds, being based on a highly nonstandard interpretation of quantum theory.

What I will describe here is a very different approach to quantum computation, inspired by evolutionary computing rather than ordinary digital computing. This new approach, which I call Evolutionary Quantum Computing (EQC), will be shown to be a reasonable hypothesis as to the nature of quantum computing in the brain, and also a plausible design for non-brainlike electronic quantum computers. And, more surprisingly, it will be shown to have dramatic implications for the theory of consciousness known as "quantum panpsychism" (Herbert, 1994).

Quantum panpsychism holds that consciousness emanates from quantum uncertainty, so that all entities in the universe are conscious. But it does not address the issue of degrees of consciousness -- e.g. of whether or why I am more conscious than the couch I am sitting on as I type. Here I will show that EQC provides a good explanation of why brains are so acutely conscious. I will define "systemic consciousness" as, roughly, the degree to which raw quantum consciousness aids in a system's practical operation; and I will argue that EQC is essentially the only means by which it is possible for systems to achieve high systemic consciousness.

But what is EQC all about? The idea is a very, very simple one. Instead of programming a quantum computer, set up an ensemble of quantum computers, and allow them to evolve. Create criteria for judging QC's, and then, in the manner of natural selection, allow successful QC's to survive and (probabilistically) mutate and combine to form new candidate QC's, whereas unsuccessful QC's perish. The result is that one has quantum computers fulfilling desired functions via unknown means.
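To make this loop concrete, here is a minimal classical sketch of the kind of evolutionary process described above, written in Python. Everything in it is an illustrative stand-in: the candidate representation, the mutation and crossover operators, and the fitness function are all hypothetical, and a real EQC would replace the candidate data structures with uncollapsed quantum subsystems. The one feature meant to carry over is that fitness is computed from outputs alone, never by inspecting a candidate's internal mechanism.

import random

POPULATION_SIZE = 20
GENERATIONS = 50

def random_candidate():
    # Stand-in for preparing a fresh "quantum subsystem".
    return [random.random() for _ in range(8)]

def mutate(candidate):
    # Stand-in for perturbing a subsystem without observing its state.
    return [g + random.gauss(0, 0.1) for g in candidate]

def crossover(a, b):
    # Stand-in for recombining two subsystems.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def fitness(candidate, test_inputs):
    # Only outputs are examined -- never the internal mechanism.
    return -sum(abs(sum(candidate) - x) for x in test_inputs)

test_inputs = [4.0, 4.2, 3.8]
population = [random_candidate() for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    ranked = sorted(population, key=lambda c: fitness(c, test_inputs),
                    reverse=True)
    survivors = ranked[:POPULATION_SIZE // 2]
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + [mutate(c) for c in children]

best = max(population, key=lambda c: fitness(c, test_inputs))
print("best fitness:", fitness(best, test_inputs))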

One does not know how the successful QC's in the ensemble are working, and one cannot know, because to tell, one would have to destroy the systems in question by making wave-function-collapsing observations. In general, the only way to tell the true, multi-universe state of a quantum system is to create an ensemble of identical systems, measure each of them, and compute statistics. But one cannot do this in this case, because there is no way to recreate the exact steps according to which the evolution took place, as these steps themselves involved probabilistic mutations and combinations.

What one has, in EQC, is quantum computation, making full use of the multi-universe power of quantum nonlocality, which does not require collapse in order to be useful to the collapsed world. I.e., one has systems operating across the full spectrum of universes that are useful to systems operating within individual universes, or across narrow bands of universes. The only constraint is that the narrow-universe-band system must not ask the broad-universe-band systems what they are doing.

Surprisingly enough, one can argue that this is a viable model of brain dynamics. Edelman, with his theory of neuronal group selection, has already made a strong case for the brain as an evolutionary system. And Jibu and Yasue have made a good case for the brain as a macroscopic quantum system. Putting these two together, we obtain a strikingly solid case for the brain as an EQC. The EQC explanation of why the human brain is so acutely conscious then fits right in.

The brain is, however, not the only possible evolutionary quantum computer. I will describe here an alternate EQC called a Quantum Evolvable Logic Array that could plausibly be built today, using existing devices (configurable silicon chips and supercooled superconducting rings) as components. This implies that it is possible to build electronic systems displaying the same degree and type of acute awareness as human brains. In fact, I will argue that it is not necessary to go to such engineering lengths to achieve this goal, as pseudo-randomness generated in digital computers is sufficient to provide randomness within subjective perspectives, and objective randomness is not an empirical concept.

Neural Darwinism and EQC in the Brain

The notion of the brain as an evolutionary system has been articulated and promoted most effectively by Gerald Edelman (1987), via his theory of neuronal group selection, or "Neural Darwinism." The starting point of Neural Darwinism is the observation that neuronal dynamics may be analyzed in terms of the behavior of neuronal groups. The strongest evidence in favor of this conjecture is physiological: many of the neurons of the neocortex are organized in clusters, each containing perhaps 10,000 to 50,000 neurons.

Once one has committed oneself to looking at groups, the next step is to ask how these groups are organized. A map, in Edelman's terminology, is a connected set of groups with the property that when one of the inter-group connections in the map is active, others will often tend to be active as well. Maps are not fixed over the life of an organism. They may be formed and destroyed in a very simple way: the connection between two neuronal groups may be "strengthened" by increasing the weights of the connections running from the one group to the other, and "weakened" by decreasing those weights.

Formally, we may consider the set of neural groups as the vertices of a graph, and draw an edge between two vertices whenever a significant proportion of the neurons of the two corresponding groups directly interact. Then a map is a connected subgraph of this graph, and the maps A and B are connected if there is an edge between some element of A and some element of B. (If for "map" one reads "program," and for "neural group" one reads "subroutine," then we have a process dependency graph as drawn in theoretical computer science.)

This is the set-up, the context in which Edelman's theory works. The meat of the theory is the following hypothesis: the large-scale dynamics of the brain is dominated by the natural selection of maps. Those maps which are active when good results are obtained are strengthened; those maps which are active when bad results are obtained are weakened. And maps are continually mutated by the natural chaos of neural dynamics, thus providing new fodder for the selection process. By use of computer simulations, Edelman and his colleague Reeke have shown that formal neural networks obeying this rule can carry out fairly complicated acts of perception.
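As a purely classical illustration of this selection-of-maps dynamic, the following Python sketch represents neuronal groups as vertices of a graph, maps as connected subgraphs, and applies strengthen/weaken/mutate rules of the kind just described. The particular graph, the reward signal, the thresholds and the mutation rule are invented for illustration and carry no biological claim.

import random

# Undirected graph of neuronal groups: group -> set of neighboring groups.
group_graph = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1, 4}, 4: {2, 3},
}

def is_map(groups):
    # A map is a connected subgraph of the group graph.
    groups = set(groups)
    if not groups:
        return False
    frontier, seen = [next(iter(groups))], set()
    while frontier:
        g = frontier.pop()
        if g in seen:
            continue
        seen.add(g)
        frontier.extend(group_graph[g] & groups)
    return seen == groups

# Candidate maps, each with a connection strength (weight).
maps = {frozenset({0, 1, 2}): 1.0, frozenset({1, 3, 4}): 1.0}

def reward(m):
    # Stand-in for "good results were obtained while this map was active".
    return random.random()

for step in range(100):
    if not maps:
        break
    for m in list(maps):
        maps[m] += 0.1 if reward(m) > 0.5 else -0.1   # strengthen or weaken
    # Mutation: occasionally grow a surviving map by one adjacent group.
    m = random.choice(list(maps))
    neighbor = random.choice([n for g in m for n in group_graph[g]])
    candidate = frozenset(m | {neighbor})
    if is_map(candidate) and candidate not in maps:
        maps[candidate] = 1.0
    # Selection: maps whose weight decays below a threshold perish.
    maps = {m: w for m, w in maps.items() if w > 0.2}

print({tuple(sorted(m)): round(w, 2) for m, w in maps.items()})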

This thumbnail sketch, it must be emphasized, does not do justice to Edelman's ideas. In Neural Darwinism Edelman presents neuronal group selection as a collection of precise biological hypotheses, and presents evidence in favor of a number of these hypotheses. However, I consider that the basic concept of neuronal group selection is largely independent of the biological particularities in terms of which Edelman has phrased it. As argued in (Goertzel, 1993), I suspect that the mutation and selection of "transformations" or "maps" is a necessary component of the dynamics of any intelligent system.

Edelman's theory provides half of the argument that the brain is an EQC: it provides evidence that the brain is an evolving system. Edelman uses nonlinear differential equations on finite-dimensional spaces to model the dynamics of neuronal groups; he does not consider these groups as quantum systems. There is much evidence, however, that the brain is not as "classical" a system as Edelman and other more conventional neural net theorists would have it.

Early thinking regarding quantum dynamics in the brain mostly involved quantum objects called Bose-Einstein condensates (Marshall, 1989), which may be capable of forming large but short-lived structures in the brain (Pessa, 1988). Marshall proposed that these condensates form from the activity of vibrating molecules (dipoles) in nerve cell membranes, and form the physical basis of mind. There seem to be problems with the details of Marshall's original proposal (Clarke, 1994). But the basic idea of Bose-Einstein condensates in the brain remains sound. Much recent speculation has centered on the idea that such condensates occur around the microtubules within neurons (Hameroff, 1994). Hameroff has argued that it is not the classical flow of electricity between neurons, but the quantum-nonlocal flow of charge between microtubular structures within cells, that makes up the dynamics of thought. The most refined investigation in these directions is the work of Jibu and Yasue (1996) on water megamolecules in the space between neurons. They make very strong arguments that these molecules can combine to form extended nonlocal quantum systems, operating in parallel interaction with the classical neural networks that are more commonly studied.

I find Jibu and Yasue's perspective quite appealing. Rather than throwing out all we have learned about neural networks, in this view, we must merely accept that there are parallel quantum systems, working together with neural networks to create thought. In terms of Edelman's theory, we need not reject the idea of Neural Darwinism -- we must merely accept that these populations of neuronal maps have a quantum aspect as well as a classical aspect. In other words, the brain is an evolving population of quantum neural networks, selected and mutated based on their functionality in regard to their interaction with perceptual and motor systems, as determined by needs of the organism. Edelman, plus Jibu and Yasue, equals the brain as an EQC.

As neuroscience this is speculative -- but so is, at this stage, everyone's model of the brain. The model of the brain as an EQC fits with all observed data, and has the advantage of incorporating both the now-standard neural network perspective and the emerging quantum brain perspective. And furthermore, as we shall see, it provides a novel and powerful solution to the problem of human consciousness.

Design for an Evolutionary Quantum Computer

The brain, assuming it is indeed an EQC, is only one among many possible types of EQC. For engineering purposes, it is interesting to ask whether it is possible to build an EQC out of currently extant, off-the-shelf components. The answer to this question is, I believe, a definite yes. In this section I will present a specific design according to which one could build an EQC using components currently available for purchase. This design would not be inexpensive to implement, for one of the components must be a device demonstrating macroscopic quantum coherence. The only commonly available, reasonably well-understood such device is the SQUID, or Superconducting Quantum Interference Device, as used in medical imaging equipment.

The design given here is based on interfacing an array of SQUIDs with a re-configurable Field Programmable Gate Array (FPGA) logic chip. I call it QELA, for Quantum Evolvable Logic Array. It is genuinely feasible, but is intended more as a "proof of concept" than as a design to be realized in detail. If one were to go about seriously building such a machine, one would first wish to much more carefully research the various possible components and designs.

A SQUID is a superconducting ring interrupted by a small Josephson tunnel junction. It supports currents in a loop structure, the idea being that the current flow can be in a superposition of clockwise and anti-clockwise directions. Upon observation, the current flow will suddenly switch itself to one direction or the other; but when unobserved, the system lives equally in two universes -- one with the current flowing clockwise and the other with the current flowing counterclockwise. To be precise, the two distinct macroscopic states of the flux of the SQUID correspond to currents circulating in opposite directions through an inductance of 0.2 nH. Many quantum effects such as resonant tunneling, photon-assisted tunneling and population inversion, usually seen only in microscopic systems, have been observed in the SQUID (Han, 1996; Diggins et al, 1994; Leggett, 1984). Furthermore, the effect of environmental noise on SQUIDs is marked and fascinating: the tunneling characteristics of the macroscopic flux have been shown to change dramatically from resonant to continuous as the damping is increased.

The SQUID is known today as the most sensitive existing device for magnetic field detection. It has been extensively developed for traditional low-temperature superconductors requiring cooling with liquid helium to 4 Kelvin (-269 C), and is commercially available from several suppliers. In 1987 high-temperature superconducting ceramics were discovered, requiring cooling only to the temperature of liquid nitrogen, 77 Kelvin (-196 C), and SQUID sensors have now been developed based on these new materials. The bare SQUID device has a typical magnetic field sensitivity of 2 pT, while for SQUIDs coupled to a superconducting input coil, 100 fT has been demonstrated. This corresponds to an energy resolution better than 10^-30 J/Hz. In a bandwidth of, say, 1 Hz, this is equivalent to the energy of lifting one hydrogen atom 10 cm in the Earth's gravitational field.

One could use output from a SQUID or other macroscopic quantum object to control inputs to an ordinary computer, in various ways, but a more interesting prospect to consider in this context is configurable hardware. What is configurable hardware? Observe that, at present, we have two main methods for implementing mathematical algorithms in computers: hardware and software. In the hardware model, we implement an algorithm by wiring connections between physical devices. In the software model, we implement an algorithm by creating a series of instructions to be fed to a fixed physical device, whose connections are created without regard to the particular algorithm being implemented. Configurable hardware is a third approach, in which the interconnection between active logic elements is dependent on a control store, manipulable through software (Gray and Kean, 1989). The standard implementation of configurable hardware today is the Field Programmable Gate Array (FPGA), which allows the implementation of multi-level logic functions via a regular routing structure which includes simple programmable switches.

FPGA's are categorised into two distinct groups: re-configurable and non re-configurable devices (Ebeling et al, 1991). Re-configurable FPGA's, the type of interest here, use static random access memory (SRAM), erasable programmable read only memory (EPROM) or electrically erasable programmable read only memory (EEPROM) programming technologies for implementing programmable switches. Non re-configurable FPGA's use the antifuse programming technology for implementing programmable switches (an antifuse is a two-terminal device which creates an irreversible link when a voltage is applied across it). Re-configurable FPGA's are ideal for the kind of quantum-classical interface I am considering here. In fact, John Koza (personal communication, 1997) has experimented with these devices for evolutionary programming -- he has evolved circuit configurations to carry out various tasks. By giving macroscopic quantum input to such a chip, one automatically creates an evolutionary quantum computer corresponding to Koza's evolving conventional chip.

In order to use SQUIDs to provide quantum input to a computer, one simply needs to create a connection that outputs the magnetic fluxes from an array of SQUIDs to electronic switches within an FPGA. The binary value of a single logical switch can be determined by whether the flux of a single SQUID is clockwise or counterclockwise. Clockwise means 0, counterclockwise means 1, etc. One then obtains a logic circuit whose crucial switches have quantum indeterminate values: whether they are 0 or 1 is fundamentally indeterminate. Furthermore, the settings of each individual quantum switch will affect the charge passing through each other quantum switch, via the natural dynamics of charge flowing through the logic gate array, and so the whole system of SQUIDs plus FPGA will be a macroscopic quantum system, demonstrating quantum coherence and indeterminacy: a multi-universe logic array!

Measuring the state of the configurable hardware would collapse the states of the SQUIDs to definite directions. But measuring the output of the hardware need not do so. And the beauty of evolutionary programming is that one is indeed judging one's logic system only by its results, not by how it goes about getting these results. All that is needed to turn this SQUIDs-plus-FPGA system into an elegant evolutionary quantum computer is some way to cause unsuccessful FPGA's to mutate more than successful FPGA's. But this is also easy; one can always stimulate the SQUIDs feeding into unsuccessful FPGA's with extra charge, so as to cause their flux to flip in a random way, thus providing extra noise to these systems. Furthermore, one can copy successful FPGA's, without measuring their state, as follows: to copy FPGA A, just take another FPGA B identical to A, and connect each switch of B to the same SQUID that the corresponding switch of A is connected to -- without measuring the information passing from the SQUIDs to the FPGA's. Minor variants on successful quantum FPGA's can also be created this way, by copying a successful unit and then connecting some switches to other, newly randomized SQUIDs instead of those used by the unit being copied. Furthermore, two quantum FPGA's A and B can be crossed over in the manner of sexual reproduction, by the creation of a new FPGA called C which uses half of the SQUID inputs from A and half of them from B. All of these "genetic" operations (Goldberg, 1988) can be carried out without collapse of the wave function, without observation of the states of individual SQUIDs.
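The following Python toy illustrates the wiring-level "genetic" operators just described. The SQUID flux directions are modeled as hidden bits that the breeding code never reads; only the wiring (which SQUID feeds which switch) is copied, mutated or crossed over. The class and function names, and the choice of sixteen switches, are hypothetical conveniences, not a model of the physical hardware.

import random

NUM_SWITCHES = 16

class SquidBank:
    # Pool of SQUIDs; their flux directions stay hidden from the breeder.
    def __init__(self):
        self._flux = {}                          # squid id -> hidden bit
    def new_squid(self):
        sid = len(self._flux)
        self._flux[sid] = random.randint(0, 1)   # hidden preparation
        return sid
    def reset(self, sid):
        self._flux[sid] = random.randint(0, 1)   # "jolt" with extra charge
    def read_for_fpga_only(self, sid):
        return self._flux[sid]                   # used by the FPGA, never by the breeder

bank = SquidBank()

def new_wiring():
    # A quantum FPGA is represented here only by its wiring: switch -> SQUID id.
    return [bank.new_squid() for _ in range(NUM_SWITCHES)]

def copy(wiring):
    # Copying: point a fresh FPGA at the very same SQUIDs -- no measurement.
    return list(wiring)

def mutate(wiring, rate=0.2):
    # Mutation: rewire some switches to freshly randomized SQUIDs.
    return [bank.new_squid() if random.random() < rate else sid
            for sid in wiring]

def crossover(wiring_a, wiring_b):
    # Crossover: half of the SQUID inputs from each parent.
    half = NUM_SWITCHES // 2
    return wiring_a[:half] + wiring_b[half:]

parent_a, parent_b = new_wiring(), new_wiring()
child = mutate(crossover(copy(parent_a), copy(parent_b)))
print("child wiring (SQUID ids only; flux states never observed):", child)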

A very rough schematic diagram of this proposed EQC design -- the QELA, or Quantum Evolvable Logic Array -- is given in Figure 1.

----------------------------------------------------   

FIGURE 1

SCHEMATIC DESIGN FOR 
QUANTUM EVOLVABLE LOGIC ARRAY



1           2                             3  
  /|<------ /|                           /|            
 / |       / |                          / |  
 | | <-----| |-- -----------------------| |
 | |<------|/--|                        | |
 |/        |   |------------------------|/
  |        |
  |        |
  |        |
  |--______|
     |   |
     |_ _| 4.
      

1. panel of re-configurable FPGA's

2. converter that takes SQUID magnetic flux output and uses it to set switches of FPGA's

3. panel of SQUIDs

4. conventional digital software process that supervises the apparatus, sending flux-resetting charge to SQUIDs feeding unsuccessful FPGA's

------------------------------------------------------------

The crucial point is that the FPGA's are not "programmed". Each one is "evolved" by the software controlling process. Each FPGA has certain inputs and certain outputs. The controlling process continually feeds each region with inputs and monitors the outputs -- but does not monitor what goes on in between (in order to preserve quantum indeterminacy). It knows what behavior it wants from each region, and it resets the SQUIDs of the unsuccessful FPGA's in order to jolt them into more useful behavior. Furthermore, it may create new quantum FPGA's by mutating and crossing over the successful ones. The desired behavior of an FPGA may be defined as the learning of some fixed function, or "ecologically" in terms of the behaviors of the other regions.
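A small classical sketch of this supervisory dynamic, in Python: the controller presents inputs, scores only the observed outputs, and jolts units that are not behaving as desired, leaving successful units untouched. The BlackBoxUnit class, the target behavior, and the numbers of units and cycles are all invented for illustration; the hidden bits merely stand in for unobserved SQUID flux states.

import random

class BlackBoxUnit:
    # Hidden internal bits; the supervisor may only call respond() and jolt().
    def __init__(self, size=8):
        self._hidden = [random.randint(0, 1) for _ in range(size)]
    def respond(self, x):
        return (sum(self._hidden) + x) % 2
    def jolt(self):
        # Re-randomize one hidden bit, as extra charge would flip a flux.
        i = random.randrange(len(self._hidden))
        self._hidden[i] = random.randint(0, 1)

def desired(x):
    return x % 2                                 # a fixed target behavior

units = [BlackBoxUnit() for _ in range(10)]
test_inputs = list(range(8))

for cycle in range(200):
    for unit in units:
        hits = sum(unit.respond(x) == desired(x) for x in test_inputs)
        if hits < len(test_inputs):              # unsuccessful: jolt it
            unit.jolt()

best = max(units, key=lambda u: sum(u.respond(x) == desired(x)
                                    for x in test_inputs))
print("best unit matches the target on",
      sum(best.respond(x) == desired(x) for x in test_inputs), "of 8 inputs")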

What good would such a system be? Intuitively, the function of the QELA in an AI context is simple: it provides pattern recognition, by automatic inference of functions from input/output pairs, and it does so with quantum ultra-efficiency, by searching all possible universes at once. This capacity could be useful in many different contexts. For example, in the Webmind AI system we are currently building at IntelliGenesis Corp. (see Goertzel, 1996), one has a network of nodes carrying information, and each node has its own resident pattern-recognition processes, which recognize patterns in the information at that node, and patterns emergent between the information at that node and the information at other nodes. In this context, the QELA could, via its software controller, be used by Webmind to supply pattern recognition at nodes. One would thus have an overall coupled quantum-classical AI system.

Note that the measurement carried out by the controlling software process collapses the chip into that region of Hilbert space consisting of all possible programs consistent with the observed I/O behavior -- but does not collapse it any further. Thus, the chip is carrying out all possible programs consistent with the given I/O behavior. This is a fundamentally new approach to machine learning. Instead of settling on one inferred automaton consistent with the desired behaviors, one takes a weighted sampling of the whole space of automata consistent with the behaviors and the underlying machinery.
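The contrast drawn here can be made concrete with a small classical toy in Python: given a few observed input/output pairs, classical learning commits to one consistent hypothesis, whereas the chip described above is, in effect, sampling over all of them. The uniform weighting used below is an illustrative choice, not a claim about the chip's actual Hilbert-space measure.

import itertools

inputs = list(itertools.product([0, 1], repeat=2))       # all 2-bit inputs
observed = {(0, 0): 0, (1, 1): 1}                         # partial I/O data

# Every boolean function on 2 bits is a 4-tuple of output bits.
all_functions = list(itertools.product([0, 1], repeat=len(inputs)))

def output(fn, x):
    return fn[inputs.index(x)]

consistent = [fn for fn in all_functions
              if all(output(fn, x) == y for x, y in observed.items())]

query = (0, 1)                    # an input not covered by the observations
single_choice = output(consistent[0], query)              # settle on one
ensemble_vote = sum(output(fn, query) for fn in consistent) / len(consistent)

print("consistent hypotheses:", len(consistent))
print("single automaton says:", single_choice)
print("ensemble probability of 1:", ensemble_vote)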

Why Humans Are So Acutely Conscious

As noted above, many thinkers have posited a connection between consciousness and quantum randomness. At first this may seem an overly facile attempt to resolve two difficult issues -- quantum measurement and experienced consciousness -- by equating them with each other. But a careful philosophical study reveals that there is more to the matter.

As I have argued in detail in (Goertzel, 1997), the relation between consciousness and randomness is a deep one. Randomness, after all, is mathematically defined as indescribability (Chaitin, 1988): a random sequence is one that admits no description shorter than itself. And the essence of the quale, the experienced moment, is precisely its elusiveness, the way it always escapes one's grasp. The flow of time may be phenomenologically characterized as the process of qualia repeatedly escaping their own grasp. Thus it was that William James, and his friend Charles S. Peirce, equated the experienced moment with "pure chance" -- i.e. randomness -- well before the quantum indeterminacy of the physical universe was established.

But if chance is consciousness then everything in the universe is conscious, as nothing is ever totally deterministic -- everything has some element of chance to it. Quantum theory teaches us that the universe is indeterminate, but observing the universe makes it appear definite. In other words, definiteness is characteristic of subjective views of the universe, but not of the universe apart from anybody's subjective view. The equations of quantum theory tell us that all subjective views are in a sense "equivalent" -- but they are not equivalent to the "objective" or intersubjective universe, which is the collection of all possible subjective views, and is therefore a probability distribution rather than a definite entity. Consciousness, then, is the property that everything has when it is being considered intersubjectively instead of as an object within the fixed subjective world of some other object.

What is wrong with everything being conscious? Nothing -- panpsychism is the oldest theory of consciousness, and the only theory of consciousness not riddled with contradictions. However, pure panpsychism is an insufficiently informative doctrine, as it does not tell us why some entities in the world should be significantly more conscious than others. I.e., why are humans more conscious than cats, or birds, or worms, amoebas, rocks, atoms?

One might deny that this is the case -- deny, that is, that any one thing is more conscious than any other. This is a perfectly valid perspective, but not a terribly informative one; in taking this view, one is missing out on important aspects of the intuitive notion of consciousness. It is therefore interesting to think about ways to use the basic concept of quantum panpsychism to measure the degrees of consciousness of various entities. The definition that is most attractive to me, in this regard, is one that I call systemic consciousness. The degree of systemic consciousness of an entity is defined to be proportional to the extent to which that entity employs quantum randomness in the building of new patterns and the active maintenance of old ones.

The idea here is quite simple. I.e., if one avers that pattern is the fundamental stuff of the mind and universe (Goertzel, 1993; Bateson, 1980), and that consciousness has to do with randomness, then it follows that the more a system uses randomness to produce and maintain patterns, the more conscious that system is. In different language, we may say that systemic consciousness is utilization of raw, quantum panpsychic consciousness/nonlocality/randomness for evolution and autopoiesis of emergent structures. And this, obviously, is where evolutionary quantum computing enters the picture! EQC is a mechanism by which brains employ quantum indeterminacy, in large quantity, to solve problems, perceive forms, and create and maintain patterns.

There are many algorithms that employ randomness to help solve problems. Mary Ann Metzger (1997) has discussed the random "temperature" in the simulated annealing algorithm for neural net evolution, and proposed that it should be equated with consciousness. I believe that she is on the right track, but that she does not penetrate far enough into either the nature of randomness or the dynamics of the brain. Simulated annealing uses randomness, quantum or otherwise, as a simple control parameter. Once the solution is found, the temperature is set equal to zero. This is different from EQC. EQC can use a dynamic of progressive randomness decrease too; in fact, the QELA design described above does exactly this. But the final "temperature" or amount of randomness will not be zero. Instead, the quantum uncertainty will be part of the solution. The effectiveness of the solution to the problem is contingent on processing taking place in many universes simultaneously, and hence on the noncollapse of the wave function, the nondecline of the degree of randomness to zero.
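The distinction can be shown in a few lines of Python. Below, the same hill-climbing loop is run twice: once with the usual annealing schedule, where the temperature is driven to zero, and once with a floor below which the temperature is never allowed to fall, so that residual randomness remains part of the final behavior. The cost function, schedule and floor value are arbitrary illustrations, and of course the residual randomness here is pseudo-random rather than quantum.

import math, random

def cost(x):
    return (x - 3.0) ** 2 + 0.3 * math.sin(8 * x)   # a bumpy 1-D landscape

def anneal(steps=5000, temperature_floor=0.0):
    x, t = random.uniform(-10, 10), 2.0
    for step in range(steps):
        t = max(temperature_floor, 2.0 * (1 - step / steps))
        candidate = x + random.gauss(0, 0.5)
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
            x = candidate
    return x, t

frozen_x, frozen_t = anneal(temperature_floor=0.0)   # classical annealing
lively_x, lively_t = anneal(temperature_floor=0.2)   # randomness never removed

print("frozen solution:", round(frozen_x, 3), "final temperature:", frozen_t)
print("lively solution:", round(lively_x, 3), "final temperature:", lively_t)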

So, the moral of EQC for awareness is this. All entities are conscious, but some are, in the sense of systemic consciousness, more conscious than others. EQC is one strategy for achieving a high degree of systemic consciousness; and no other strategy of comparable effectiveness has been proposed or discovered. It is the brain's internal evolutionary dynamic that makes it so creative and imaginative; and it is this same evolutionary dynamic that makes the brain so acutely conscious, so able to correlate macroscopic quantum indeterminacy with the creation of definite patterns in the world -- within some particular observer's point of view.

The ultimate subtlety here is that definiteness is, in the quantum picture, relative. To see this one has to think about the nature of quantum wave function collapse. The quantum universe is fundamentally probabilistic, but observations are said to collapse it into definiteness. Thus, if I observe a system, then for me, the system's state becomes definite when I observe it. On the other hand, if you get your information about the system's state by asking me, then for you, the system's state only becomes definite when you ask me, not when I observe it. This is called the "paradox of Wigner's friend." What it means is that it is only within a subjective point of view that definite patterns exist, instead of probabilistic superpositions of patterns. In the transpersonal, intersubjective quantum reality, there is no collapse to definiteness. But in the subjective view of a given entity, wave functions collapse when they meet that entity, and the spectrum of universes is significantly compressed. And systemic consciousness occurs when, within this compressed spectrum of universes, produced by system X, subsystems of system X that exist in the broad spectrum of universes play an important creative role. This is what EQC accomplishes with unparalleled effectiveness, by its trick of constructing broad-universe-spectrum (quantum) systems whose results only, and not whose internal mechanisms, are collapsed.

It is important to understand what is being claimed here. Qualia are acknowledged as inexplicable. Raw awareness exists on a level beneath rational understanding; quantum indeterminacy is a manifestation of raw awareness within modern scientific theory. Raw awareness is not measurable, in itself. However, it is measurable in its relation with empirical patterns, and in this guise I have called it systemic consciousness. Systemic consciousness is quantifiable and it would seem that brains have more of it than nearly all other systems we know of. The reason brains have more of it is, I claim, that brains are evolutionary quantum computers.

The definition of systemic consciousness in terms of raw awareness is admittedly a philosophical trouble spot, and perhaps deserves a little more comment. The trouble here actually traces back to the very point at which raw awareness is first considered as having been definitely identified, so as to be "correlatable" with a system's activities. After all, randomness is not strictly identifiable. The definition of systemic consciousness presumes that one has identified the unidentifiable, that one has determined which segments of a certain system are truly random and which are not. This is in principle impossible.

But the catch is, of course, that we make such judgements all the time. The definition of systemic consciousness is therefore meaningful relative to a given system's subjective judgements of randomness. We emerge with a notion of systemic consciousness as the ability of system X to correlate macroscopic quantum randomness, as perceived by system Y, with the creation of patterns that are collapsed and definite in the subjective world of system Y. I.e., we have to think about whose subjective criterion of randomness is being used here, as well as who is collapsing reality into definiteness. Ultimately, what is most important is the system's own self-perception: its ability to correlate macroscopic randomness as perceived by itself with the creation of patterns that are definite in its own subjective reality. This is what makes us acutely aware: we take the void, the unclassifiable, the nothing, the incomprehensible moment within us, and we use it to create new things and to preserve the current contents of our minds. This is the loop of awareness that goes through our minds day in and day out; it is something that all entities do, but that we do with greater intensity than most, due to our internal EQC process which allows substantial macroscopic interaction between quantum randomness and classical behaviors.

Finally, let us return to practical engineering matters. The QELA, as described above, is a highly specialized hardware device -- it is worth asking whether, according to the EQC model, it is possible for an ordinary digital computer to manifest acute consciousness in the manner of the human brain. This is a deep question which requires further research, but my tentative conclusion from the foregoing is that the answer is yes. My reasoning is as follows. Suppose one constructed a digital computer to simulate the laws of quantum physics, and built a digital brain within this simulated multiverse. Then, the laws of quantum physics would not really be adhered to by this simulation, because there would only be a finite spectrum of universes, and because wave function collapse (universe selection) would be occurring according to some pseudo-random number generator rather than by true randomness. However, from the point of view of the system itself (as well as, incidentally, from the point of view of human observers), this pseudo-randomness would pass perfectly well for true randomness; and the number of universes in the simulation would be so large as to appear subjectively boundless. Therefore, the system would be systemically conscious by the above definition.
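The sense in which pseudo-randomness "passes for" randomness from inside the simulation can be illustrated with a trivial Python check: an observer limited to statistical interrogation of the bit stream cannot, in practice, tell a good pseudo-random generator from a quantum one. The two tests and the thresholds below are illustrative only, far short of a serious battery of randomness tests -- which is exactly the point being made.

import random

def frequency_test(bits):
    # The fraction of ones should hover near 0.5.
    return abs(sum(bits) / len(bits) - 0.5) < 0.02

def runs_test(bits):
    # The number of runs (maximal blocks of equal bits) should be near n/2.
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    return abs(runs - len(bits) / 2) < 0.02 * len(bits)

bits = [random.randint(0, 1) for _ in range(100000)]    # pseudo-random source

print("frequency test passed:", frequency_test(bits))
print("runs test passed:", runs_test(bits))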

More clearly and more intriguingly, however, one may ask whether the Internet, as a system involving the synergetic combination of humans and digital computers, could ever be acutely conscious. And here the answer is clearly yes, because, quite apart from the question of the acute consciousness of the digital component, the human component of the system provides acute consciousness. One can view the digital component of the Internet as the FPGA in the QELA design, and the human component of the Internet as the SQUID component, providing quantum-uncertain, quantum-nonlocal input. It is clear that the future of computing holds many fascinating possibilities, at which contemporary digital computer design barely begins to hint.

Conclusion

The tale of evolutionary quantum computing and awareness is a complicated story about something very simple. It is complicated not because consciousness or systemic consciousness or EQC are complicated -- they are perfectly simple -- but because the language of science is complicated and is furthermore ill-suited to the discussion of consciousness. All fancy verbiage and biological, engineering and mathematical details aside, what we have here is: raw awareness; raw awareness correlated with creative and self-sustaining activity; and raw awareness allowed to be correlated with creative and self-sustaining activity by a system structure that judges raw awareness only by its results, and not by an awareness-killing scrutiny into the details of how raw awareness works. That's the whole story.

This is a speculative theory of human consciousness, to be sure, in the sense that the brain has not yet been conclusively shown to be a macroscopic quantum system. However, the theory is absolutely clear in its conceptual logic, and is consistent with everything known about the brain at present. In fact, it is the only theory extant which integrates recent results suggesting quantum brain function with more conventional neural network models of the brain (i.e., Neural Darwinism). Furthermore, it has concrete and fascinating implications for next-generation computer engineering. What is speculative today may well be common sense tomorrow!

References

Bateson, Gregory (1980). Mind and Nature. New York: Bantam Books.

Chaitin, Gregory (1988). Algorithmic Information Theory. New York: Addison-Wesley.

Clarke, C.J.S. (1994), "Coupled Molecular Oscillators Do Not Admit True Bose Condensations" Journal of Physics A.

del Giudice E., Doglia S., Milani M. and Vitiello G. (1986), "Solitons and Coherent Electric Waves in a Quantum Field Theoretical Approach", in Modern Bioelectrochemistry, ed. F. Gutmann and H. Keyzer (Plenum: New York).

Deutsch, David (1985). "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer," Proceedings of the Royal Society of London A 400, pp. 97-117.

Diggins, J., Spiller, T. P., Clark, T. D., Prance, H., Prance, R. J., Ralph, J. F., Van den Bossche, B., Brouers, F.: "Macroscopic superposition and tunnelling in a SQUID ring", Phys. Rev. (1994)

Ebeling, C., G. Borriello, S. A. Hauck, D. Song and E. A. Walkup (1991), "TRIPTYCH: A New FPGA Architecture", International Workshop on Field Programmable Logic and Applications, pp. 75-90.

Edelman, Gerald (1987). Neural Darwinism. New York: Basic Books.

Goertzel, Ben (1993). The Structure of Intelligence. New York: Springer-Verlag

Goertzel, Ben (1993). The Evolving Mind. New York: Gordon and Breach.

Goertzel, Ben (1996). "Webmind White Paper," online at http://intelligenesis.net

Goertzel, Ben (1997). "Chance and Consciousness," in Mind in Time, Edited by Combs, Germine and Goertzel, New York: Hampton Press

Goldberg, David (1988). Genetic Algorithms in Search, Optimization and Machine Learning. New York: Addison-Wesley.

Gray, J. P. and T. A. Kean (1989), "Configurable Hardware: A New Paradigm for Computation", Proceedings of Decennial CalTech Conference on VLSI, pp. 277-293.

Hameroff, S.R. (1994), "Quantum Coherence in Microtubules: a Neural Basis for Emergent Consciousness", Journal of Consciousness Studies, 1, pp. 91-118.

Han, Siyuan (1996), "Exploring the Quantum Nature of a Macroscopic Variable: Magnetic Flux in a SQUID," paper presented at APS Conference, St. Louis.

Hempfling, Lee Kent (1997), "The CORE Processor," published online at http://www.enticypress.com/

Jibu, Mari and Kunio Yasue (1996). The Quantum Theory of Consciousness. New York: John Benjamins.

Leggett, A.J. (1984). Contemp. Phys. 25, p. 583

Marshall, I.N. (1989), "Consciousness and Bose-Einstein Condensates", New Ideas in Psychology. 7, pp. 73-83.

Metzger, Mary Ann (1997), "Consciousness By Degrees," preprint

Nunn C.M.H (1994), "Collapse of a Quantum Field May Affect Brain Function", Journal of Consciousness Studies, 1, p. 128.

Pessa, E. (1988), "Symmetry Breaking in Neural Nets", Biological Cybernetics, 59, pp. 277-81.

Wheeler, John and W. Zurek (Editors). The Quantum Theory of Measurement. San Francisco: W.H. Freeman.