The Emergence of Global Web Intelligence and How it Will Transform the Human Race
Anyone who has had the privilege to give birth, or to watch someone else give birth, knows what a wonderful, exciting, magical event it is. This is so even when, as in the birth of a human, the organism being born has very little new about it -- it is "just" a replication, with variations, of already existing creatures.
How much more exciting, then, is the birth of an entirely new kind of intelligent organism. And this is precisely, I suggest, what those of us who are alive and young today are going to have the opportunity to experience: the birth of an entirely new kind of intelligent organism, a global Web mind. The Internet as it exists today is merely the larval form of something immensely more original and exciting.
Speculation as to the future of the Internet is nothing new. In fact, a great deal of the current "Net fever" has to do with what the Net will someday become, rather than what it presently is. The high rate of change of Internet technology, and computer technology, virtually guarantees that the Net of 10, 20 or 50 years hence will bear little overt resemblance to the Net of today.
As to exactly what the Net will become, however, there have been very few detailed answers. Discussion tends to focus on replacing current non-digital technologies with superior Net equivalents, rather than on the possibility for something fundamentally new. It is observed that electronic publishing is fast becoming a reality; in certain areas, such as science, the Web has already radically changed the sociology of information distribution. It is argued that e-mail and electronic voice communication are well on their way to rendering conventional post and telephone service obsolete. Electronic commerce, it is said, will transform the economy, and along the way it will help the Net to grow even faster than it already has. All these things are interesting and important; in the end, however, they are merely extensions of existing aspects of human society into the electronic domain. Surely the Net, like any fundamentally new technology, is going to do something more than this. But what?
The notion of an emerging global Web mind constitutes a radical answer to this question. The Net as it exists today is comparable, I suggest, to the mind of a very young child -- a child who has not yet learned to think for herself, or even to distinguish herself from the outside world. What we will see over the next couple decades is the growth and maturity of this infantile mind into a full-fledged, largely autonomous, globally distributed intelligent system.
This vision of a global Web mind is related to several other visions of the digital future. For instance, many have envisioned a future Net populated by artificially intelligent entities, interacting in virtual worlds. This vision was portrayed most memorably by William Gibson in Neuromancer. While this is a reasonable idea, and it does not contradict my own thinking in any way, it is different from what I am projecting, which is that the Net itself will become a global intelligent system -- a World Wide Brain.
On the other hand, Hans Moravec and others have envisioned that humans will eventually "download" themselves into computers, and lead bodiless digital lives. This is also related to, but different from the idea of a global Web mind. Initially, I propose, the global Web mind will exist as an entity separate from human beings: we will interact with it via our computer terminals, and more creative interface devices such as VR headsets. This will be Phase One of the global Web mind. But eventually, I propose, most if not all humans will be drawn into the Web mind itself, and the Web mind will become a kind of collective human/digital entity. In this second phase, Moravec's vision will become fulfilled, but with two twists: we will be incorporating ourselves into an "alien" intelligence of our own creation; and we will be partially synthesizing ourselves with other humans, rather than merely downloading our individual human selves into a digital world.
This may seem a grandiose projection, more science-fictional than scientific. However, this impression is false. In fact, the idea of a global Web mind is well grounded in contemporary complexity science, cybernetics and cognitive science. And furthermore, it is a natural extension of current Web technology. Using ideas from modern science, one can design Web software that, if implemented today, would help boost the Web out of the larval stage and into the initial stages of true global intelligence.
The change which is about to come upon us as a consequence of the emergence of the global Web mind will be a very large one -- comparable in scope, I believe, to the emergence of tools, or language, or civilization. What will distinguish this change from these past ones, however, is its speed. In this sense, those of us who are alive and young today are very fortunate. We will be the first human generation ever to witness a major change in collective humanity with a time span on the order of a single human lifetime.
The global Web mind is a big topic, difficult to do justice to in a brief article. In all probability, there will one day be hundreds of books -- and thousands of Web pages -- devoted to it. In the limited space that I have here, however, I will nonetheless attempt to cover most of the important bases.
I will begin with computing: I will explain the role of the global Web mind in the history of computing; and I will give some detailed ideas regarding how one can use ideas from theoretical cognitive science to actually design a particular kind of global Web mind. And then I will turn to the more philosophical side of things: I will discuss the notion that the global Web mind may be only part of a move toward a "global brain" that literally incorporates all of humanity. This is a fascinating prospect, which gives rise to questions such as "Will the global brain be sane or insane?", and which ultimately leads on to the spiritual implications of global intelligence.
The conclusion of this diverse discussion is striking and definite. The global Web mind is coming soon: it is achievable with current technology, and with near-future technology will become easy to implement. Furthermore, it does have deep philosophical and spiritual meaning. It represents a digital concretization of Jung's collective unconscious -- an active, living repository of the most abstract and subtle emergent patterns of the modern human mind. It is both the ultimate realization of human intelligence and culture -- and the next step beyond.
From World Wide Web to World Wide Brain
The emergence of the global Web mind will, I believe, mark a significant turning-point in the history of intelligence on Earth. On a less cosmic level, it will obviously also play an important role in the specific area of computing. It will bring together two strands in computer science that have hitherto been mostly separate: artificial intelligence and networking. As we watch the initial stages of the global Web mind emerge, the interplay between the Net and AI will be intricate, subtle and beautiful. In the global Web mind, both networking and artificial intelligence reach their ultimate goals.
Let us begin by placing the emerging global Web mind in its proper place in the history of the Internet. Conventional wisdom divides the history of the Internet up till now into three phases:
1. Pre-WWW. Direct, immediate interchange of small bits of text, via e-mail and Usenet. Indirect, delayed interchange of large amounts of text, visual images and computer programs, via ftp.
2. WWW. Direct, immediate interchange of images, sounds, and large amounts of text. Online publishing of articles, books, art and music. Interchange of computer programs, via ftp, is still delayed, indirect, and architecture-dependent.
3. Active Web. Direct, immediate interchange of animations and computer programs as well as large texts, images and sounds. Enabled by languages such as Java, the Internet becomes a real-time software resource. "Knowbots" traverse the web carrying out specialized intelligent operations for their owners.
The third phase, which I call the Active Web phase, is still in a relatively early stage of development, driven mainly by the dissemination and development of the particular programming language Java. However, there is an emerging consensus across the computer industry as to what the ultimate outcome of this phase will be. For many applications, people will be able to run small software "applets" from WWW pages, instead of running large, multipurpose programs based in their own computers' memories. The general-purpose search engines of the WWW phase will evolve into more specialized and intelligent individualized Web exploration agents -- "infobots" or "knowbots." In short, the WWW will be transformed from a "global book" into a massively parallel, self-organizing software program of unprecedented size and complexity.
The Active Web is exciting. But it should not be considered as the end-point of Web evolution. I wish to look one step further, beyond the Active Web to the intelligent Web -- or, in short, the global Web mind.
The Active Web is a community of programs, texts, images, sounds and intelligent agents, interacting and serving their own ends. The global Web mind is what happens when the diverse population of programs and agents making up the Active Web locks into an overall, self-organizing pattern -- into a structured strange attractor, displaying emergent memory and thought-oriented structures and dynamics not directly programmed into any of its parts. At this point traditional ideas from computer science become almost useless -- one is moving into new territory. We are almost there today. In order to grapple with the logic of the global Web mind, one requires new ideas, drawn from the science of complex systems, from cybernetics, cognitive science and dynamical systems theory.
It may seem hasty to be talking about a fourth phase of Web evolution -- a Global Web mind -- when the third phase, the Active Web, has only just begun, and even the second phase, the Web itself, has not yet reached maturity. But if any one quality characterizes the rise of the WWW, it is rapidity. The WWW took only a few years to dominate the Internet; and Java, piggybacking on the spread of the Web, has spread more quickly than any programming language in history. Thus, it seems reasonable to expect the fourth phase of Internet evolution to come upon us rather rapidly, certainly within a matter of years.
I wish to stress the difference between the use of artificially intelligent agents to serve functions within the Web, which is part of the Active Web phase, and the achievement of significant intelligence by the Web as a whole, or by large subsets of the Web, which characterizes the global Web brain. There is a world of difference here (or perhaps I should say, "a mind of difference"). In the first case, one has small AI agents that harness the power of a single computer or local-area network. The Web is the home of such an agent, rather than the mind of such an agent. Such agents may form components of the global Web mind, but the key in this case is in the way the actions of the different agents are orchestrated together. Not just any population of intelligent agents will synergize into an overall emergent Web brain -- this is no more likely than taking an arbitrary collection of cortical columns, interconnecting them willy-nilly, and obtaining a functioning cerebral cortex. To have an intelligent system one needs not only clever parts, but also clever organization; and this is the essential difference between the Active Web and the global Web brain.
In principle, it would be possible to have a truly intelligent AI agent living within the Active Web. In the long term, this may well occur, when we have sufficiently powerful computers. In the short term, however, no individual computer has the power to support human-level intelligence; but the totality of computers connected to the Web, taken together, does possess this power. Thus we see that the Internet provides an immediate solution to what has always been the most serious problem of the discipline of artificial intelligence: the problem of computing power, or as it is called, the problem of scale.
The problem of scale is, quite simply, that our best computers are nowhere near as powerful as a chicken's brain, let alone a human brain. As a consequence, one is always implementing AI programs on computers that, in spite of special-purpose competencies, are far less computationally able than one really needs them to be. And so one is always presenting AI systems with problems that are far, far simpler than those confronting human beings in the course of ordinary life. When an AI project succeeds, there is always the question of whether the methods used will "scale up" to problems of more realistic scope. And when an AI project fails, there is always the question of whether it would have succeeded, if only implemented on a more realistic scale. In fact, one may argue on solid mathematical grounds that intelligent systems should be subject to "threshold effects," whereby processes that are inefficient in systems below a certain size threshold, become vastly more efficient once the size threshold is passed.
Some rough numerical estimates may be useful. The brain has somewhere in the vicinity of 100,000,000,000 to 10,000,000,000,000 neurons, each one of which is itself a complex dynamical system. There is as yet no consensus on how much of the internal dynamics of the neuron is psychologically relevant. Accurate, real-time models of the single neuron are somewhat computationally intensive, requiring about the computational power of a low-end Unix workstation. On the other hand, a standard "formal neural network" model of the neuron as a logic gate, or simple nonlinear recursion, is far less intensive. A typical workstation can simulate a network of hundreds of formal neurons, evolving at a reasonable rate.
Clearly, whatever the cognitive status of the internal processes of the neuron, no single computer that exists today can come anywhere near to emulating the computational power of the human brain. One can imagine building a tremendous supercomputer that would approximate this goal. However, recent history teaches that such efforts are plagued with problems. A simple example will illustrate this point. Suppose one sets out, in 1996, to build a massively parallel AI machine by hard-wiring together 100,000 top-of-the-line chips. Suppose the process of design, construction, testing and debugging takes three years. Then, given the current rate of improvement of computer chip technology (speed doubles around every eighteen months), by the time one has finished building one's machine in 1999, its computational power will be the equivalent of only 25,000 top-of-the-line chips. By 2002, the figure will be down to around 6,250.
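This depreciation arithmetic can be made explicit. The following minimal sketch assumes only the eighteen-month doubling period cited above; the function name is my own:

```python
# Relative worth of a fixed machine, measured in current top-of-the-line
# chips, assuming chip speed doubles every 1.5 years (the rate assumed
# in the text above).

def equivalent_chips(initial_chips, years_elapsed, doubling_period=1.5):
    """How many of today's top chips the aging machine is equivalent to."""
    return initial_chips / 2 ** (years_elapsed / doubling_period)

print(equivalent_chips(100_000, 3))  # 1999: 25000.0
print(equivalent_chips(100_000, 6))  # 2002: 6250.0
```

The exponential makes the point vividly: within a decade the hard-wired machine is worth less than a thousandth of its original relative power.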
Instead of building a supercomputer that is guaranteed to be obsolete by the time it is constructed, it makes more sense to utilize an architecture which allows the continuous incorporation of technology improvements. One requires a highly flexible computer architecture, which allows continual upgrading of components, and relatively trouble-free incorporation of new components, which may be constructed according to entirely new designs. Such an architecture may seem too much to ask for, but the fact is that it already exists, at least in potential form. The WWW has the potential to transform the world's collective computer power into a massive, distributed AI supercomputer.
Once one steps beyond the single-machine, single-program paradigm, and views the whole WWW as a network of applets, able to be interconnected in various ways, it becomes clear that, in fact, the WWW itself is an outstanding AI supercomputer. Each Web page, equipped with Java code or something similar, is potentially a "neuron" in a world-wide brain. Each link between one Web page and another is potentially a "synaptic link" between two neurons. The neuron-and-synapse metaphor need not be taken too literally; a more appropriate metaphor for the role of a Web page in the WWW might be the neuronal group (a cluster of closely interconnected neurons). But the point is that, in principle, modern technology opens the possibility for the Web to act as a dynamic, distributed cognitive system. The Web presents an unprecedentedly powerful environment for the construction of large-scale intelligent systems. As the Web expands, it will allow us to implement more and more intelligent global Web minds, leading quite plausibly, in the not too far future, to a global AI brain exceeding the human brain in raw computational power.
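The page-as-neuron metaphor can be made concrete with a toy simulation. In the sketch below, the miniature "Web," the page names, and the dynamical constants are all illustrative assumptions of my own; it simply spreads activation along hyperlinks, the way activation spreads along synapses:

```python
# Toy spreading-activation dynamics over a web-page graph: each page is
# treated as a "neuron," each hyperlink as a "synapse." The graph and the
# constants are illustrative assumptions, not any real Web protocol.

links = {                       # hypothetical miniature Web
    "home": ["ai", "music"],
    "ai": ["home", "complexity"],
    "complexity": ["ai"],
    "music": ["home"],
}

activation = {page: 0.0 for page in links}
activation["home"] = 1.0        # stimulate a single page

DECAY, SPREAD = 0.5, 0.25       # assumed dynamical constants
for _ in range(10):
    incoming = {page: 0.0 for page in links}
    for page, targets in links.items():
        for t in targets:       # pass activation along each outgoing link
            incoming[t] += SPREAD * activation[page]
    activation = {p: DECAY * activation[p] + incoming[p] for p in links}

# After a few steps, activation has diffused along the link structure.
for page, a in sorted(activation.items()):
    print(f"{page:12s} {a:.4f}")
```

Even this trivial system shows the essential point: the pattern of activity is determined not by any single page, but by the global link structure.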
The Structure of Web Intelligence
The notion of emergent Web intelligence is beyond the reach of traditional computer science, not to mention traditional psychology, neurobiology, or philosophy of mind. Given this fact, one might justifiably take a "wait and see" approach, an approach which says: "Once the Web mind is here, then surely we will begin to make progress toward understanding what it is and how it works. Why hurry things?"
However, this approach is only appropriate if one does not consider it important to influence the formation of the global Web mind. If one wishes to actively participate in the genesis of such a system, then one must understand something about the system before it is here in full-fledged form. I believe, in fact, that the early stages of development of the global Web mind are extremely important; and that, because of this, it is a crucial task for us to understand as much as we can about emergent Web intelligence right now.
What one needs in order to understand the incipient global Web mind, I suggest, is a good, solid general theory of mind -- a theory of mind that goes beyond the specifics of human brains and serial computers, and explains the fundamental structures and dynamics of mind in a way that does not depend on any particular physical substrate. Such a theory of mind is not to be found in mainstream computer science or mainstream psychology or philosophy of mind. But it is, I suggest, to be found in the science of complex systems -- in cybernetics, cognitive science, and dynamical systems theory.
The study of mind using complex-systems ideas is in fact a reasonably old tradition, going back to the mid-century cyberneticians and General Systems Theorists. Early theorists along these lines, however, were hamstrung by the difficulty of the mathematics involved. Today we can avoid much of the mathematics by recourse to computer simulations, which is a tremendous boon. Researchers can now construct computer simulations of complex-systems-theoretic intelligent systems, and use complex-systems-theoretic computational tools to analyze data derived from the mind and brain. Some research in this direction is described at the Society for Chaos Theory in Psychology site, http://www.vanderbilt.edu/AnS/psychology/cogsci/chaos/cspls.html and at Serendip, an interdisciplinary cognitive science site at Bryn Mawr College, full of cool, educational Java applets ( http://serendip.brynmawr.edu/ ).
The flavor of research in this area may be gotten across by mentioning some current research projects. In one of my own current projects, I am constructing a new computational system called the SEE or Simple Evolving Ecology Model, which incorporates ideas from genetic algorithms and mathematical ecology to form a new kind of learning and memory system, to be used for such applications as automatic categorization, and time series prediction. This illustrates the application of complex systems science to the construction of AI systems. On the other hand, I am also collaborating with the psychologist Allan Combs in using computational pattern recognition methods to analyze the way people's moods change over time. We are seeking to elicit the "language of mood" -- the formal linguistic structures that underlie mood shifts. This illustrates a more psychological type of research.
While this kind of detailed work is important and interesting, however, my main focus over the last decade has been on using complex systems science to arrive at a good general understanding of the nature of mind. Rather than seeking to construct small-scale semi-intelligent systems, as in conventional AI, or seeking to make experimental predictions about human behavior in laboratory situations, as in conventional psychology, I have sought to answer the general questions: By what mathematical and conceptual principles are minds, in general, structured? How, if one were given a sufficiently powerful computer, would one go about programming a highly intelligent mind? These are philosophical questions, perhaps, but I have not sought philosophical answers, but rather mathematical and scientific ones. The answer I have arrived at is a comprehensive mathematical and philosophical system that I call the psynet model (psy for "mind"; net for "network"). This model has been presented in detail in a series of moderately technical books (The Structure of Intelligence and The Evolving Mind, published in 1993; Chaotic Logic, published in 1994; and From Complexity to Creativity, to be published in 1997), rather crudely-produced electronic versions of which may be found on my Website (http://godel.psy.uwa.edu.au/).
The psynet model of mind has not been empirically demonstrated in any specific and definitive way, but it has been shown to be consistent with the full range of data about the mind that is available from computer science, psychology, neurobiology and philosophy. It possesses a conceptual coherence and mathematical elegance that reminds one of physics more than psychology or AI. Most importantly for the present purposes, it provides a way of thinking about intelligence that is not tied to humans, or tied to bodies of any kind at all, but rather applies to mind in the abstract -- mind in any kind of system, including a globally distributed network of computers like the World Wide Web.
The psynet model achieves its abstractness by assuming that mind is mathematical. The stuff of mind, in other words, is abstract pattern. Mind is not made of matter, it is made of abstract forms, or, considered dynamically, abstract processes. Forms and processes can be realized by physical systems, such as brains or computers, but that doesn't make these brains or computers minds. This implies that a virtual mind is just as good as a physical mind. Psychology, in the abstract, should apply just as well to digital systems as to biological systems.
Using concepts drawn from the psynet model, one can derive a very specific programme for guiding the emerging global Web mind toward maturity. Making the initial assumption of a Web populated by interacting, intelligent agents, the psynet model tells one how these agents should interact with each other in order to give rise to the global, emergent structures of mind. It even suggests particular types of software programs that, if implemented throughout the Web, would greatly accelerate the evolution of the global Web mind.
The psynet model begins with a new metaphor for the mind -- a metaphor, I suggest, which is particularly appropriate to a distributed, heterogeneous, chaotically fluctuating system like the Web. This metaphor is a simple and unorthodox one: the mind is a "magician system." It is a structured community of super-powerful magicians, each of whom is able to cast spells transforming magicians into different magicians.
Traditional AI had the metaphor that mind was made of rules. Connectionist AI views mind as neurons, and patterns of activation among neurons. In the psynet model, on the other hand, mind is made of magicians -- autonomous, subjective agents with a variety of powers, including the power to transform one another. In an abstract, mathematical sense, all these views of mind are interchangeable -- in practice, however, they lead one in rather different directions.
In traditional computer science, these interacting, intercreating form/processes that I call "magicians" might be called instead agents. To avoid the conceptual baggage associated with existing terms such as "agents," however, I have opted to introduce a new term. Agents are very general; they are basically entities that interact with each other. Magicians are more specific, in that they do not merely send messages to one another, they actively transform each other, creating and annihilating other magicians with ease. As in an agent system, there may be a geometry to a mind magician system -- meaning that each magician form/process can only interact with "nearby" magician form/processes at any given time. But this is merely a minor complication: in this case, everything is acting on its neighbors, transforming them into new things that will then act on their neighbors in different ways.
In the psynet model, the mind's magicians are constantly casting spells on other nearby magicians, with the result that the identity of each magician is constantly changing. However, there are certain patterns amidst this endless flux of shifting existence -- and these regularities, these emergent solidities amidst the chaos of magician intercreation, are themselves recognized by clever magicians, who are scanning their environment for patterns. The patterns of magician creation become in turn new magicians -- this is the "twist" at the core of thought, the essential trick of the mind. The fact that most of the magicians are recognizing patterns is essential: in the psynet model, the patterns of the mind form from the synergetic activities of a group of inter-transforming, pattern-recognizing agents.
But how does this fluctuating mass of magician form/processes give rise to anything coherent, permanent, real -- thoughts, feelings, selves, perceptions, actions? There is no special trick here: merely the fact of interproduction. One can have magician systems which sustain themselves, in that each magician in the system is created by one or more of the other magicians in the system. Such systems are called autopoietic -- they represent order emerging from flux, the hallmark of complexity.
For instance, suppose A, B and C are all mental forms. Then, A can cast a spell that changes B into A, while B casts a spell that changes A into B -- the two will keep producing each other, cooperatively, ad infinitum. Or, perhaps, A and B can combine to produce C, while B and C combine to produce A, and A and C combine to produce B. The number of possible systems of this sort is truly incomprehensible. But the point is that, if a system of magicians is mutually interproducing in this way, then it is likely to survive the continual flux of magician interaction dynamics. Even though each magician will quickly perish, it will just as quickly be re-created by its co-conspirators. Autopoiesis creates self-perpetuating order amidst creative chaos.
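The flavor of such a mutually interproducing system can be conveyed with a toy simulation. In the sketch below, the rule table, the population size and the dynamics are my own illustrative assumptions, not part of the psynet model's formal mathematics; each magician type is re-created by the other two, so no type ever permanently vanishes:

```python
import random

# Toy "magician system": pairs of magicians combine to transform a third.
# The rule table is an illustrative assumption: A and B produce C,
# B and C produce A, A and C produce B -- a mutually interproducing triple.

RULES = {frozenset(("A", "B")): "C",
         frozenset(("B", "C")): "A",
         frozenset(("A", "C")): "B"}

def step(pop, rng):
    i, j = rng.sample(range(len(pop)), 2)           # two magicians meet
    product = RULES.get(frozenset((pop[i], pop[j])))
    if product is not None:                         # the pair casts a spell,
        pop[rng.randrange(len(pop))] = product      # transforming a third
    return pop

rng = random.Random(1)
pop = ["A", "B", "C", "A", "B", "C", "A", "B", "C", "A"]
seen = set()
for _ in range(2000):
    step(pop, rng)
    seen.update(pop)

# Although individual magicians are transformed constantly, every type
# keeps recurring: the system as a whole perpetuates itself.
print(sorted(seen))
```

The interesting property is structural: whenever one type dies out, the remaining two types necessarily produce it again, which is exactly the autopoietic self-maintenance described above.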
In a slightly different language, a self-producing magician system which is robust with respect to fluctuations is an attractor of mental dynamics. The notion of "attractor" comes from chaos theory as applied to physics, but it applies to psychological dynamics just as well. An attractor is simply a characteristic behavior of a system -- a pattern of behavior that seems to pop up again and again, in a huge variety of situations. Magician systems can have complex, intricate attractors, like any other dynamical systems -- attractors as subtle and intricate as the Mandelbrot set, but existing in such high-dimensional spaces that visualization is impossible. In general, these will be "probabilistic attractors," rather than the definite, deterministic kind, but this is just a technical point. The key point is that, in the psynet view, all the apparently "solid" structures of mind -- memory and perceptual systems, and so forth -- are actually probabilistic attractors of magician dynamics. What appears to be definitely there is actually a dynamic process, dying and then regenerating itself every instant. Being is generated out of Becoming.
There is a great variety of possible autopoietic magician systems. And so, hypothetically, intelligent magician systems might be structured in any number of possible ways. The psynet model makes the claim, however, that there are certain abstract structures of intelligence common to all, or at least very many intelligent systems. The attractors of the mind spontaneously self-organize into larger autopoietic superstructures.
Many different such superstructures may exist. Perhaps the most important one, however, is the structure called the dual network. The dual network is a network of pattern/processes that is simultaneously structured in two ways. It is structured hierarchically, so that simple structures build up to form more complex structures, which build up to form yet more complex structures, and so forth; and the more complex structures explicitly or implicitly govern the formation of their component structures. And it is structured heterarchically: different structures connect to those other structures which are related to them by a sufficient number of pattern/processes. Psychologically speaking, the hierarchical network may be identified with command-structured perception/control, and the heterarchical network may be identified with associatively structured memory.
What I mean by a "mind network" or psynet, then, is a magician system which has evolved into a dual network structure. Or, to place the emphasis on structure rather than dynamics, it is a dual network whose component processes are magicians. The central idea of the psynet model is that the psynet is necessary and sufficient for mind. "Psynet" is just a shorthand term for "mind magician system."
At first glance the dual network may seem an extremely abstract structure, unrelated to concrete psychological facts. But a bit of reflection reveals that the hierarchical and heterarchical networks are ubiquitous in theoretical psychology and cognitive neuroscience. They are part of the human brain/mind; they are not part of the existing Web to any significant extent; but they must be made part of the Web, if the Web is to become intelligent.
Hierarchy is visible throughout the brain, most notably in the visual cortex. Neuroscientists have charted in detail how line processing structures build up to yield shape processing structures, which build up to yield scene processing structures, and so forth. The same is true of the study of motor control: a general idea of throwing a ball translates into specific plans of motion for different body parts, which translates into detailed commands for individual muscles. It seems quite clear that there is a perceptual/motor hierarchy in action in the human brain.
This hierarchical structure is the basis of the idea of subsumption architecture in robotics, pioneered by Rodney Brooks at MIT. In this approach, one begins by constructing low-level modules that can carry out simple perceptual and motor tasks, and only then constructs modules residing on the next level up in the hierarchy, which loosely regulate the actions of the low-level modules. The perceptual-motor hierarchy is created from the bottom up. Another example, on a more abstract level, is the role of psychological hierarchy in personality structure and dynamics.
On the other hand, the heterarchical structure of mind is seen most vividly in the study of memory. The associativity of human long-term memory is well-demonstrated, and has been simulated by many different mathematical models, most notably neural networks such as those developed by John Hopfield and Teuvo Kohonen. The various associative links between items stored in memory form a kind of sprawling network. The kinds of associations involved are extremely various, but what can be said in general is that, if two things are associated in the memory, then there is some other mental process which sees a pattern connecting them. In fact, this idea of associative memory played a crucial role in the original conception of the idea of hypertext -- way back in 1945, when Vannevar Bush wrote his landmark article As We May Think.
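A Hopfield-style associative memory can be sketched in a few lines: store some patterns, then recover a stored pattern from a corrupted cue. The stored patterns and network size below are arbitrary illustrative choices:

```python
import numpy as np

# Minimal Hopfield-style associative memory: store two binary patterns,
# then recall a stored pattern from a corrupted cue. The patterns and
# the network size are arbitrary illustrative choices.

patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])
n = patterns.shape[1]

# Hebbian weight matrix: "neurons" that fire together, wire together.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    """Iterate the network dynamics until the state settles on a memory."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

corrupted = patterns[0].copy()
corrupted[0] = -corrupted[0]     # flip one "neuron" in the cue
print(recall(corrupted))         # settles back onto the stored pattern
```

Recall here is purely associative: the cue is not looked up by address, but pulls in the nearest stored pattern through the web of pairwise connections, which is exactly the heterarchical, pattern-based linkage described above.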
The key idea of the dual network is that the network of memory associations (heterarchical network) is also used for perception and control (hierarchical network). As a first approximation, one may say that perception involves primarily the passing of information up the hierarchy, action involves primarily the passing of information down the hierarchy, and memory access involves primarily exploiting the associative links, i.e. the heterarchical network.
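As a concrete, drastically simplified illustration, the dual network can be sketched as two kinds of links over the same process nodes: hierarchical links carrying percepts up and control down, and heterarchical links carrying associations. The class and function names below are my own invention, not part of the psynet model proper:

```python
# Toy sketch of the dual network: each process node carries hierarchical links
# (parent/children), used for perception and control, and heterarchical links
# (associations), used for memory access.

class ProcessNode:
    def __init__(self, name):
        self.name = name
        self.parent = None          # hierarchical: control flows down
        self.children = []          # hierarchical: percepts flow up
        self.associations = {}      # heterarchical: node -> association strength

    def add_child(self, child):
        child.parent = self
        self.children.append(child)

    def associate(self, other, strength):
        # associations are symmetric: some process sees a pattern joining the two
        self.associations[other] = strength
        other.associations[self] = strength

def percept_path(node):
    """Perception: pass information *up* the hierarchy toward the root."""
    path = []
    while node is not None:
        path.append(node.name)
        node = node.parent
    return path

def recall(node, threshold):
    """Memory access: follow associative links above a strength threshold."""
    return sorted(n.name for n, s in node.associations.items() if s >= threshold)

# Mirror the visual-cortex example: lines build up to shapes, shapes to scenes.
scene = ProcessNode("scene")
shape = ProcessNode("shape")
line = ProcessNode("line")
scene.add_child(shape)
shape.add_child(line)
edge = ProcessNode("edge")
line.associate(edge, 0.8)

print(percept_path(line))   # ['line', 'shape', 'scene']
print(recall(line, 0.5))    # ['edge']
```

The essential point the sketch makes is that the same node population supports both traversals; nothing in the psynet model requires the two link types to live in separate structures.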
In order that an associative, heterarchical network can be so closely aligned with an hierarchical network, it is necessary that the associative network be structured into different levels of clusters -- clusters of processes, clusters of clusters of processes, and so on. This is what I have, in The Evolving Mind, called the "fractal structure of mind" -- although, of course, one does not really have an infinite hierarchy as in mathematical fractals, but rather a finite hierarchy as in fractal pictures viewed on the computer screen. If one knew the statistics of the hierarchical network of a given mind, the fractal dimension of this cluster hierarchy could be accurately estimated. The Australian psychologist David Alexander has argued for the neurobiological relevance of this type of fractal structure, and he has constructed a number of interesting neural network simulations using this type of network geometry, including an interesting simulation of artificial bugs evolving strategies to acquire food in an artificial world.
It must be emphasized that neither the hierarchical network nor the heterarchical network is a static entity; both are constantly evolving within themselves, and the two are constantly coevolving together. This is the fluctuating, self-adaptive nature of mind. In particular, two kinds of dual network dynamics may be distinguished: evolutionary and autopoietic. This distinction may be blurry in certain cases, but it is a useful one. It is a psychological correlate of the biological distinction between evolution and ecology.
Regarding evolutionary dynamics, in The Evolving Mind I have hypothesized that the brain's hierarchical network evolves new programs by genetic programming, i.e., by the mutation and crossover of subnetworks to form new subnetworks. And in The Structure of Intelligence I have hypothesized that the heterarchical network improves and maintains its associativity by moving the positions of processes and clusters of processes, to see if the new positions give better associativity than the old. The beauty of the dual network is that these two dynamics mesh naturally together: every time programs are crossed over, clusters are moved; and every time clusters are moved, programs are crossed over. The moving of subnetworks to new locations is a very simple dynamic which is meaningful both hierarchically and heterarchically -- the value of a "move" may be judged simultaneously from the point of view of perception/action (hierarchy) and the point of view of memory (heterarchy). This leads to some psychologically interesting situations. One can imagine cases where a move is suppressed because it is advantageous from the point of view of one of the networks, but disadvantageous from the point of view of the other; and one can imagine other cases where moves are made with extreme vigor because they are advantageous for both networks.
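The dual-criterion judgment of a "move" can be caricatured in a few lines. The numerical gains below are stand-ins; nothing here is Goertzel's actual fitness measure, only the decision logic the paragraph describes:

```python
# Hypothetical sketch of the dual-criterion "move" dynamic: a candidate
# relocation of a subnetwork is scored from both points of view, and is
# made vigorously, suppressed as conflicted, or rejected accordingly.

def judge_move(hier_gain, heter_gain):
    """hier_gain: change in perception/action fitness (hierarchy);
    heter_gain: change in associativity of the memory network (heterarchy)."""
    if hier_gain > 0 and heter_gain > 0:
        return "move vigorously"      # advantageous for both networks
    if hier_gain > 0 or heter_gain > 0:
        return "conflict: suppress"   # good for one network, bad for the other
    return "reject"

print(judge_move(0.3, 0.5))    # move vigorously
print(judge_move(0.3, -0.2))   # conflict: suppress
print(judge_move(-0.1, -0.4))  # reject
```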
Evolution explains the origination of new forms; autopoiesis, understood as stable inter-creation in a mind-geometric context, explains the persistence of forms. In order to achieve the full psynet model, one must envision the dual network, not simply as an hierarchy/heterarchy of mental processes, but as an evolving hierarchy/heterarchy of autopoietic process systems, where each such system is considered to consist of a "cluster" of associatively related ideas/processes. A healthy mental system must evolve process systems that balance the two types of dynamics. Excessive focus on autopoiesis leads to an unproductive, insular mental system; excessive focus on external fitness leads to a noncreative, environment-driven mental system.
To some, this whole approach may seem to embody an overly reductive notion of "mind." What about consciousness, awareness, inner experience? In Chaotic Logic I have shown that it is possible to account for many phenomena regarding attention and awareness in terms of the psynet model. In the end, however, the model is not intended to account for the basic quality of experience. My perspective on these issues is similar to that taken by the quantum physicist Nick Herbert, in his book Elemental Mind. I believe that awareness is everywhere, embodied in everything, and what we are doing with our formal models of the mind is merely unravelling the particular ways awareness manifests itself in highly intelligent systems, such as the human organism, or the emerging World Wide Brain. The psynet model is a model of the structure of intelligence -- a structure which mediates but does not constitute an intelligent system's flow of experience.
Even if all entities are conscious, however, it may still be said that some entities are more conscious than others. I am more conscious than a rock, more conscious than a snail, more conscious than Ronald Reagan. And operationally, in the psynet model, I assume that some magicians can possess a greater "force of consciousness" than others; and that this greater force is represented by an increased influx of randomness in the interactions of the magician. In intelligent systems, there are particular patterns which tend to be associated with powerful consciousness. One of these is a circuit joining perceptual and memory processes, called the Perceptual-Cognitive Loop. When we focus our attention on something, attaining a peak of consciousness, we are sending information rapidly around in circuits from our cognitive centers to our perceptual centers. In the process we are making vaguely constructed percepts or concepts more definite, more crisp, more concrete -- more real.
And this focussed consciousness is important for, among other things, the emergence of the mental structure called self. A self-system is a subnetwork of the dual network, which is a miniature "dual network" in itself, incorporating perception, action, memory, and numerous Perceptual-Cognitive Loops. It is a dynamic, creative repository of information regarding the system as a whole, and its relation with the outside world. Furthermore, self-systems are usually divided into multiple subselves, representing coherent packages of behaviors, beliefs and memories appropriate to particular types of situations. If the self is a strange attractor, subselves are the multiple lobes of the attractor; multiple personality is the case where the lobes split off and become virtually disconnected from each other.
I have only sketched the barest outline of the psynet model here; a full treatment would be too much of a digression. The key point is that the psynet model gives us, perhaps for the first time, a truly general way of thinking about the mind -- a scientific framework for modelling mind that applies to both human and digital systems. It is an abstract framework, but it has to be, otherwise it could not deal with systems as diverse as human brains, robots, and World Wide Webs. The general framework given by the psynet model can be used for analyzing existing intelligent systems -- or for designing new ones. If the scientists in Stanislaw Lem's novel Solaris, who encountered an intelligent ocean, had known about the psynet model, they could have used it to help understand what they were seeing, and maybe even whipped up an intelligent ocean of their own!
Being so general, the psynet model can never tell a complete story about any particular intelligent system. For example, in the case of the human mind, it says nothing about the mental consequences of the differences between the left and right halves of the brain. For, after all, hemispheric asymmetry is not a general property of intelligence; it is just a fluky consequence of the human body's flawed bilateral symmetry. In the case of intelligent Web systems, there are certain peculiarities that arise from the distinction between human-created pages and links and machine-created pages and links -- again, these are specific to one type of intelligent system, and cannot be dealt with by a general theory. The psynet model, on its own, rarely gives enough information to solve a particular problem regarding a particular intelligent system. But it gives one a place to start.
Most of the work I have done with the psynet model has been in the areas of philosophy of mind and theoretical psychology: simply trying to show that the model accounts for all the data that scientists have accumulated regarding the workings of the mind. My training is in mathematics: I am a theorist. Over the last year, however, I have become more interested in applications, and I have used the psynet model to construct a fairly specific design for Web intelligence, which I call WebMind.
The WebMind design represents one particular approach to the general question: how can the universal structures of mind be gotten to emerge from a massive, globally distributed system of interlinked documents? I personally believe that WebMind is the best approach, and that WebMind will turn out to be the design of the global Web intelligence of the future. Even if this does not turn out to be the case, however, the WebMind design has theoretical value. It proves that it is at least possible to construct a Web system displaying true intelligence, according to the criteria of the psynet model of mind (which is the only truly general theory of mind that we have).
The design and implementation details of WebMind are obviously complex, and I will avoid most of them here. A prototype WebMind will be available for public viewing on my Website in mid-1997. The point, for now, is merely to illustrate the basic concepts of WebMind: to give an idea of what, specifically, it might mean for the Web to become an intelligent system.
So, how to weave an intelligent Web, using the psynet model of mind?
We must begin with magicians -- the intercreating form/processes that, according to the psynet model, are the basic stuff of mind. In the brain, magicians are implemented as neural modules -- densely interconnected webs of neurons. But what is a "magician" in the context of the Web -- in an intelligent system whose components are pages and links rather than neurons and neurotransmitters?
In the WebMind design it is assumed that magicians, rather than representing neural modules as in the brain, represent Web pages or emergent patterns among Web pages. These various magicians, crystallizing aspects of Web content, are connected to each other by links, and each magician interacts directly only with the other magicians it is linked to.
It follows from this that, in WebMind, Web pages and patterns amongst them are dynamic rather than static entities. They are not just carriers of content, they are agents that control their own destiny, and contribute creatively to the emergent structures of the intelligent system to which they belong. This is a radical shift in point of view from the Web as it currently exists, but it is a necessary one. In a mind, there is no room for "deadwood"; everything must be allowed to act, to contribute its own localized intelligence and subjective perspective to the emergent parallel intelligence of the whole.
A Web page, in this view, possesses a static body of text, images and sounds, and a dynamic body of links: WebMind has to do with the way the dynamic bodies of Web pages interact with one another. Human-created links exist alongside automatically-created links, or pattern links. The human-created links represent humanly-perceived and identified relations between pages, and the weighted pattern links represent meaningful relations between pages or categories detected by WebMind itself. The magician activity of WebMind centers around the dynamical behavior of the network of pattern links.
The dual network, with its linked hierarchical and heterarchical structures, fits in very naturally here. The heterarchical structure of the dual network just means that pages and patterns should be connected to other pages and patterns that have related content: this is what the current hyperlink-structure of the Web attempts to achieve, and what the internal connection structure of an intelligent Web system will achieve far better. The hierarchical structure, on the other hand, is a structure of patterns emergent from patterns emergent from patterns. A simple kind of hierarchical WebMind structure would be a self-organizing version of the hierarchical category structure found in a tree index like Yahoo: on the lowest level, magicians correspond to Web pages; on higher levels, nodes correspond to more and more abstract categories of Web pages. Category structure, however, is only the simplest kind of emergent Web pattern; and the structure of a WebMind will go far beyond what a system like Yahoo can recognize.
One can view each magician within a WebMind as possessing a certain amount of "activation," as in a neural network. Activation spreads through the links of the network, creating loops of attentional focus as in neural network models. The subtler aspect of the dynamics of a WebMind, however, is the creation of links themselves. Rather than being formed by a particular, clever algorithmic process, these must be formed by spontaneous self-organization. When a new page is entered into a WebMind, or a new pattern is recognized, it must be moved around until it "finds its place" in the network. And, as new pages are being entered into the system, old pages are periodically allowed to move around as well, updating their own positions. Whenever a magician has particularly high activation, this is a clue that it is important, and that its link structure should be paid particular attention -- it should be granted an extra amount of latitude in its movements, so as to optimize its connections.
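A minimal spreading-activation sketch makes this concrete. The linear update rule below is an assumption of mine -- the WebMind design does not commit to any particular rule here:

```python
# Toy spreading activation over a WebMind-style network. Each step, every
# magician passes a decayed share of its activation along its weighted links.

def spread_activation(links, activation, decay=0.5):
    """links: dict node -> {neighbor: weight}; activation: dict node -> float."""
    new_act = {n: 0.0 for n in activation}
    for node, act in activation.items():
        for neighbor, weight in links.get(node, {}).items():
            new_act[neighbor] += decay * act * weight
    return new_act

links = {
    "pageA": {"pageB": 1.0, "pageC": 0.5},
    "pageB": {"pageA": 1.0},
    "pageC": {"pageA": 0.5},
}
act = {"pageA": 1.0, "pageB": 0.0, "pageC": 0.0}
act = spread_activation(links, act)
print(act)  # {'pageA': 0.0, 'pageB': 0.5, 'pageC': 0.25}

# Highly activated magicians are then granted extra latitude to revise their
# links -- here, simply any magician whose activation exceeds a threshold.
focus = [n for n, a in act.items() if a > 0.3]
print(focus)  # ['pageB']
```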
This type of system, because it is constructed on the basis of a comprehensive model of mind, contains versions of every important psychological phenomenon.
For instance, the analogue of perception in WebMind is the entry of a new Web page into the network. When a new entity is placed into the network, it immediately sets about finding its "place" in the network. Then, once the place is found (at least approximately; nothing is ever completely "settled in" WebMind, or any mind), the process of spreading activation begins. Activation spreads through the network, stimulating other magicians. Magicians with high activation are stimulated to search for new links. Once the process is done, the pages "nearby" to the newly perceived page should be its natural relatives in terms of content.
Next, action in the WebMind system consists of exchanging information -- sending a message, consisting of a collection of Web page magicians, out to some other system with which it is interfacing. And cognition, as discussed above, consists of the creative, meaning-seeking movements of magicians through the network, combined with the patterns of spreading activation. Links are constantly being created, destroyed and modified; and activation is constantly spreading along these links, in turn affecting the pattern of link modification.
One can even talk about attention -- focused consciousness -- in WebMind. As I have said above, I believe that the question of whether digital systems can be conscious is a useless one. One can, however, talk about different systems deploying their consciousness in different ways; and one of the ways intelligent systems deploy their consciousness is by focussing it on specific things, i.e. by attention. Attention in WebMind, like attention in the human brain, follows the pattern of a Perceptual-Cognitive Loop: feedback between the perceptual (Web page magician) and cognitive (category magician) levels causes a nexus of activity (high activation and hence high link update activity), and leads to extremely robust pattern-recognition schemes. As in the human mind, randomness and creativity conspire to create rigid structures: consciousness creates subjective reality!
If the Web as a whole were to adopt the WebMind system as a kind of "global Web operating system," it would become vastly more intelligent. Instead of a World Wide Web, we would have the first stages of a global Web mind. However, the Web, by its very nature, is not under anyone's control: there is no way to enforce intelligence on it from above. If one is to influence its evolution toward intelligence, one must be subtler. One must use complexity, parallelism and self-organization to seduce the Web toward greater intelligence.
Toward this end, let us envision individual Websites implementing a system like WebMind; let us call these intelligent sites MindSites. The most practical path toward a true WorldWideBrain, I suggest, is to have a worldwide network of MindSites, all in constant communication with one another. The key here is in the interactions between the different MindSites: each one is to be viewed as part of a distributed WebMind, and so the communications between different sites must reflect the communications between processes within a single WebMind. Activation must spread from magicians within one MindSite to magicians within another; and magicians must be able to actively, efficiently search for links at other sites, as well as at their own. In this way, one has a WebMind containing a huge amount of knowledge in its memory, and with a huge amount of processing power at its disposal as well. One has the potential for the emergence of a truly new kind of intelligence -- something with tremendous importance for the Web, for science, and for humanity in general.
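The key architectural point -- that each MindSite should expose, to its peers, the same operations a magician uses locally -- can be sketched as a tiny message-passing interface. The class and method names here are hypothetical, my own shorthand rather than any specified WebMind protocol:

```python
# Hypothetical sketch of the cross-site protocol: activation spreads between
# MindSites just as it does between magicians within one site, so that a
# worldwide network of MindSites behaves as one distributed WebMind.

class MindSite:
    def __init__(self, name):
        self.name = name
        self.activation = {}   # page -> activation level
        self.peers = []        # other MindSites in the worldwide network

    def receive_activation(self, page, amount):
        # activation arriving from a magician at another site
        self.activation[page] = self.activation.get(page, 0.0) + amount

    def broadcast(self, page, amount, decay=0.5):
        # spread a page's activation to every peer site, decayed in transit
        for peer in self.peers:
            peer.receive_activation(page, amount * decay)

a = MindSite("siteA")
b = MindSite("siteB")
a.peers.append(b)
a.broadcast("psynet-overview.html", 1.0)
print(b.activation)  # {'psynet-overview.html': 0.5}
```

Magicians searching for links at remote sites would use the same pattern: a remote call that looks, from the magician's point of view, identical to a local one.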
The MindSite implementation of WebMind leads one to some intriguing questions in digital psychology. For example, one question that has occupied me for quite some time is: In what sense might a MindSite, or a worldwide network of MindSites, develop a self?
The key to answering this question, I believe, lies in the way in which different MindSites interact with each other -- and model each other. To explain this I will have to go beyond the architecture of WebMind itself and explain a little bit about the specific design of MindSite -- i.e. how WebMind might be implemented at a particular site in such a way as to enable communication with WebMinds at other sites.
Each MindSite contains a certain store of information, and a collection of links to other pages and other sites. In particular, it knows of other MindSites which contain related information. When it seeks to update the links contained in one of its pages, it extends its search process outside the pages in its own domain, to pages in the domain of these other, related sites.
In judging which other sites to search in which situations, a MindSite must make use of "simplified models" of other sites. Rather than containing a complete mirror of the University of South Australia MindSite, for example, the University of Western Australia MindSite would instead contain a kind of abstract "index" summarizing the type of information contained at the University of South Australia MindSite. This index could be of the same type that WebMind uses to capture the semantic structure of individual Web pages, or categories of Web pages. This is similar to the way no person carries a complete mirror of another person's mind, but merely a kind of simplified image of the other person's mind, containing information necessary for their own purposes.
Next, a MindSite can also infer another MindSite's model of itself. This can be done by keeping a record of all the pages sent to it by each other MindSite. Suppose site A keeps a record of all pages sent to it by site B. Site A then asks how the "average" structure of these pages differs from the structure of site B itself (insofar as this structure is known by site A). This difference is Site A's image of Site B's image of Site A. In this way each MindSite can keep track of how it appears to each other one.
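This image-of-an-image computation becomes concrete if one represents a site's "structure" as a topic-weight vector -- a simplification of my own, not the indexing scheme a real WebMind would use:

```python
# Hypothetical sketch of "A's image of B's image of A": average the pages B
# has sent to A, then subtract A's index of B's general content. Positive
# entries are topics B disproportionately associates with A.

def average(vectors):
    keys = set().union(*vectors)
    return {k: sum(v.get(k, 0.0) for v in vectors) / len(vectors) for k in keys}

def image_of_image(pages_from_b, index_of_b):
    avg = average(pages_from_b)
    keys = set(avg) | set(index_of_b)
    return {k: avg.get(k, 0.0) - index_of_b.get(k, 0.0) for k in keys}

# Invented example data: two pages B sent to A, and A's index of B's content.
pages_from_b = [{"chaos": 1.0, "mind": 1.0}, {"mind": 1.0}]
index_of_b = {"chaos": 0.2, "mind": 0.4, "delpy": 0.1}
img = image_of_image(pages_from_b, index_of_b)
print(sorted((k, round(v, 2)) for k, v in img.items() if v > 0))
# [('chaos', 0.3), ('mind', 0.6)]
```

Here B sends A mostly "mind"-heavy pages, so A infers that B sees it chiefly as a mind-theory site -- regardless of what else A contains.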
The process is the same if B is a human user, or a collection of human users, instead of another site. The new pages inserted into the site by humans, or the queries inserted into the site by humans, represent an implicit model of B according to A.
What good does it do MindSite A to have an image of B's image of itself? With this information, it is able to process input from B more speedily. It knows what kinds of information B is most likely to request -- what categories B's requests are most likely to fall into. Based on this, it can bias the behavior of B's magicians as they search the magicians in A for relevant links. Just as, knowing that a certain person is interested in rock music but not computers, one will tend to talk to that person about Sonic Youth rather than TCP/IP network routing.
The overall "self-image" of MindSite A, then, consists of the composite of all the different versions of MindSite A that it is aware of: of A according to A, A according to B, A according to C, and so on. This is the sum total of what A knows about itself: how it presents to itself, and how it presents to others.
Each of these different "versions of A" leads, potentially, to a different kind of behavior within A when presented with a stimulus. In practice, though, it would be extremely cumbersome to have a different mode of behavior for each other site with which MindSite A interacts. It is necessary to lump the different "versions of A" together, if not into just one overall self-index, then at least into a small number. However, this lumping process need not be done in an arbitrary manner. I believe that, in most cases, these different "versions of A" are likely to cluster, leading to not hundreds but perhaps dozens or just a handful of significantly different "composite versions of A." These "composite versions of A," each leading to its own mode of behavior, are A's subselves.
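The lumping of "versions of A" into subselves might be sketched as a greedy similarity clustering. The clustering method, threshold, and site names below are illustrative assumptions, not part of the WebMind design:

```python
# Toy subself formation: each "version of A" is a topic-weight vector, and
# versions are greedily grouped with the first cluster they resemble.

import math

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_versions(versions, threshold=0.7):
    """Assign each version of A to the first sufficiently similar cluster,
    or start a new subself if none matches."""
    clusters = []  # each cluster: (representative vector, [site names])
    for site, vec in versions.items():
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(site)
                break
        else:
            clusters.append((vec, [site]))
    return [members for _, members in clusters]

versions = {
    "ChaosTheorySoc": {"mind": 1.0, "chaos": 0.8},
    "UCSDCogSci":     {"mind": 0.9, "chaos": 0.6},
    "DelpyFanHQ":     {"delpy": 1.0},
    "SFZine":         {"fiction": 1.0, "mind": 0.2},
}
print(cluster_versions(versions))
# [['ChaosTheorySoc', 'UCSDCogSci'], ['DelpyFanHQ'], ['SFZine']]
```

The two academic sites' images of A fall into one cluster -- one subself -- while the fan site and the fiction site each induce a subself of their own, mirroring the personal Website example below.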
For example, my own Website consists of a large amount of academic writing pertaining to my theories of the mind, a large amount of science fiction I have written, and also some pictures and essays regarding the actress Julie Delpy (on whom, being as yet a mere carbon-based life-form and subject to various hormonal peculiarities, I have had a crush for years!). The site is accessed by a variety of different people: mostly academics concerned with mind modelling or related ideas, but also a fair number of Julie Delpy fans, and a few aficionados of unusual literary science fiction. If these were MindSites rather than individuals accessing the site, one would expect the same breakdown. The image of my site according to the Society for Chaos Theory in Psychology site, or the UCSD Cognitive Science site, etc., would focus on the scientific content, and would be one cluster. The image of my site according to the Julie Delpy Fandom HQ site, or the Julie Delpy Page site, etc., would be quite different. The image of my site according to sites concerned with fiction would be different yet again.
These different "clusters" of images of my site, as detected by my own site's MindSite program, would be the different "subselves" of my site, corresponding to the different ways that the site presents itself to the Web in general. Each subself corresponds to a particular way of processing input: of directing the incoming queries from other MindSites. In other words, each subself biases perception, action and cognition, the restructuring of memory, and the focus of attention.
In this perspective, the fragmentation of a MindSite's self into subselves is a matter of computational convenience, rendered possible by the fragmentation of human knowledge itself. Clearly-demarcated subselves will only exist where the subject matter of a site divides into clearly separate categories, or where the subject matter of a site falls under a number of clearly different interpretations (which are taken up by different, other Websites).
I have been speaking about the self-system of an individual MindSite. The next question is: how does this relate to the overall self-system of the global Web mind? Does the global Web mind have a self?
At first one might note that the Web as a whole does not have any peers to relate to; one might conjecture that, because of this lack of social interaction, it may not be able to develop a fully sophisticated self-system. On the other hand, however, there is the fact that the Web does interact with humanity, and in this regard it may indeed be considered to have some "social" interaction: interaction with the "mind" of society. Instead of right and left hemispheres, one may see the evolution of a global brain divided into a Web "virtual lobe" and a human-society "virtual lobe."
Certainly, at any rate, we can see how large subnetworks of the Web might develop self-systems, just as clearly as individual Websites. The distinction between a society and a mind, which is rather clear in a human context, is somewhat fuzzy in the context of Web intelligence. One is looking at a multilayered hierarchy of intelligent systems, each one with its own self-system, overlapping but distinct from the self-systems of the intelligent systems within it and containing it.
One can envision many variations of the WebMind design. For instance, if natural language processing becomes a reality, one may see intelligent Web agents that actually write new Web pages, or rewrite old ones. In this case the WebMind design as described here will be significantly extended: Web pages will have no inert bodies, they will be entirely dynamic, entirely subject to the fluctuating self-organization of the system as a whole.
The key points, however, should be clear through the details. If the Web is to become a mind, the components of the Web must be made autonomous pattern-recognition agents, with the ability to transform one another. Furthermore, the Web as a whole must be structured in a simultaneously hierarchical and heterarchical way. This structure must not be static, like a Yahoo category tree, but dynamic -- wrapped up with the self-organizing link-forming dynamics of the individual parts of the system. Any system of this type, whether it follows the WebMind design or something different, will give rise to perception, action and cognition. Potentially, if the emergent systemic structures work out correctly (as is strongly encouraged by the WebMind design), attention-focussing feedback circuits and robust, distributed self-systems will emerge.
Just as the abstract structures of the human mind emerge from webs of neural modules, so the abstract structures of the global Web mind will emerge from networks of Web pages and intelligent Web agents. The substrate will be different, but the overall emergent structures will be the same, and so will be the general dynamics by which the global structures come out of the component parts.
Philosophy of the Global Brain
The technical details of intelligent Web systems are intriguing and important, but upon exploring them too deeply, one runs the risk of losing the forest for the trees. It is also important to think about the long-term future of global Web intelligence. What starts out with clever search engines and webservers, may end up with something truly cosmic and amazing -- a new phase in the evolution of life and intelligence. Alongside Web mind engineering, there is the emerging and exciting area of Web mind philosophy.
A WebMind system, constructed as I have proposed here or in some other way, would be an independent intelligent entity on its own, interacting with humans, but fundamentally separate from them. This is what I call Phase One of the global Web mind; and it will be, in itself, an incredibly exciting development. It will be our first opportunity ever to interact with a highly intelligent nonhuman being. And it will be an opportunity to understand ourselves more deeply, by seeing the subtle patterns of our own collective mind come to life.
Yet more dramatic, however, will be Phase Two of the global Web mind, in which the boundary between ourselves and our creation will be crossed. It has been said before, but it must be said again and again until it sinks in: we humans are on a nearly irreversible course toward digital existence. And if there is a global Web mind, then digital existence will mean some kind of close interaction with this entity. Ultimately, it seems almost certain, individual humans will merge in some way with the human-pattern-based alien intelligence that is the global Web mind.
In fact, the idea of a global, emergent intelligence, incorporating all of human society, is not at all a new one, and is not tied to the World Wide Web. Science fiction is replete with stories of "hive mind" societies, and of devices which transform human society into such an entity; and a few scientists have seriously explored the idea as well. Looking backwards, one sees an idea that has been gaining momentum -- emerging from the collective unconscious, if you will -- since the middle of the century. The global Web mind is just one particular manifestation of the "global brain" archetype. What distinguishes the global Web mind from previous speculations about emergent global intelligence, however, is its concreteness. It transforms a vague, formless philosophical possibility into something quite technologically plausible, something with definite, quantifiable characteristics.
I myself first encountered the notion of a "global brain" in the early 1980's, when, browsing in a bookstore, I happened upon Peter Russell's book The Global Brain Awakens (see http://artfolio.com/pete/GBA.html). Assigning computer and communications technology only a minor role, Russell argued that human society was reaching a critical threshold of size and complexity, beyond which it would enter the realm of true intelligence, and human consciousness would give rise to a higher level of collective consciousness. Russell's hypothesized supra-human intelligence might be called a "global societal mind," as distinct from the global Web mind that is my central topic of interest here. Both the global societal mind and the global Web mind are specific manifestations of the general concept of a "global brain" -- an emergent, distributed worldwide intelligence. And in fact, these two types of hypothetical global brain are closely related.
Russell ties the global brain in with new-age, consciousness-raising practices. By meditating and otherwise increasing our level of individual consciousness, he suggests, we bring the awakening of the global brain closer and closer. When there is enough total awareness in the overall system of humanity, humanity itself will lock into a new system of organization, and will become an autonomous, self-steering, brainlike system.
When I first read The Global Brain Awakens, I thought the ideas fascinating but rather extravagant, and placed them in the category of high-quality science fiction: fun to think about, but not immediately relevant. Years later, however, when the idea of a global Web mind occurred to me, I naturally remembered Russell's concept, and I wondered whether the two ideas were related. Perhaps, it occurred to me one morning -- in a rather speculative mood -- perhaps the Web was simply the medium which society was using to re-mold itself as an intelligent, autonomous system! After much thought and many conversations on the topic, with Russell and others, I have concluded that the two ideas are indeed closely connected. My global Web brain and Russell's global societal mind are two different flavors of the same basic concept. The global societal mind, I believe, will come along with Phase Two of the global Web mind.
More specifically, one can envision the global Web mind as leading to a global societal mind a la Russell in two different ways.
First, we might actually become part of the Web in a physical sense. This could be accomplished either by downloading ourselves into computers, by the fabled "cranial jack," or by some kind of true VR interface. Or it could be done by incorporating our existing bodies into the Web, via newfangled sensory and motor devices. Imagine brains as Websites, and modem/cell-phones inserted directly into the sensory cortex!
Or, secondly, we might become part of the Web via our actions, without any extra physical connections. This is already coming about, at least among certain sectors of the population. As more and more of our leisure and work activities are carried out via the Internet, more and more of our patterns of behavior become part of the potential memory of the global Web mind. A global Web mind could strongly influence our behaviors, merely by influencing the computer environments in which we spend most of our time.
The global Web mind and the global societal mind, then, are not really such different things at all. If a global Web mind comes about, it will clearly link humans together in a new way, leading to some kind of different and more intelligently structured social order. This is one flavor of global brain. On the other hand, if a global societal mind comes about, communications technology such as the Internet will doubtless play a huge role in its formation. This is another flavor of global brain. The question is, will there be an intelligent Web interacting with humans in a subtle way, or will there be an intelligent societal system incorporating the Web, human beings, and all their interactions? What kind of global brain will we actually see?
My personal belief is that the more radical option will come about: humans eventually will fuse physically with the Web, bringing the Web mind and the societal mind together in a very powerful way, and rendering the subtler questions raised by indirect human/global-Web-mind interaction irrelevant. But we do not yet have the technology for this, so this is just an optimistic speculation. The global Web mind in itself, on the other hand, is far less speculative -- it is well within reach of current technology to engineer a global Web mind; and the time is not far off when such a thing may emerge spontaneously, in spite of our collective lack of effort to bring it about.
Russell's book was my introduction to the global brain, but in fact, the concept of a global superorganism has been presented independently by a number of thinkers of different nationalities, over the past few decades.
Joel de Rosnay, for one, has published several books in French on the notion of a "cybionte" or cybernetic superorganism. His earliest, Le Macroscope, was published in 1975; L'Homme Symbionte, which appeared in 1996, updates the concept with discussions of chaos theory, multimedia technology and other new developments (see http://www.quebecscience.qc.ca/derosnay.html").
And Valentin Turchin, in his 1977 book The Phenomenon of Science (see http://www.amazon.com/exec/obidos/ISBN=0231039832/principiacyberneA/ ), has laid out an abstract, cybernetic theory of evolution, and used it to discuss the concept of an emerging, meta-human "superbeing." His crucial concept is the metasystem transition, a term for the phenomenon in which what was previously a whole suddenly becomes a part. For example, the cell, which has its own systemic unity, its own wholeness, becomes a part when it becomes part of the human organism. There is a metasystem transition between cells and organisms. There is also a metasystem transition between computers and networks: my PC at home is a natural whole, but the networked PC of the year 2000 will be something quite different, in that most of its software will require interaction with outside computers, and most of its behaviors will be incomprehensible without taking into account the network outside it.
Currently humans are whole systems, with their own autonomy and intelligence, and human societies display a far lesser degree of organization and self-steering behavior. But, according to Turchin, a transition is coming, and in the future there will be more and more intelligent memory, perception and action taking place on the level of society as a whole. Turchin's vision is one of progressive evolution: as time goes on, one metasystem transition after another occurs, passing control on to higher and higher levels.
One of Turchin's most active contemporary followers is Francis Heylighen, of the Free University of Brussels. Heylighen believes that the Web will be the instrument that brings about the meta-system transition, leading from humanity to the meta-human superorganism. The Principia Cybernetica Website (http://pespmcl.vub.ac.be), which he administers, contains an extensive network of pages devoted to super-organisms, meta-system transitions, global brains, and related ideas. Together with his colleague Johan Bollen, he has also experimented with ways of making the Web more intelligent, by making its link structure adaptive, in the manner of a neural network.
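The idea of an adaptive, neural-network-like link structure can be sketched very simply. The following is an illustrative toy, not Heylighen and Bollen's actual algorithm: all the class and method names, and the particular update rule, are my own assumptions. Each hyperlink carries a weight; a weight is strengthened every time a user actually follows the link (a Hebbian-style reinforcement), decays slowly when unused, and each page's outgoing links are re-ranked so that well-travelled paths become more prominent.

```python
from collections import defaultdict

class AdaptiveLinkGraph:
    """Toy model of a Web whose link strengths adapt to usage."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        # weights[page] maps each linked-to page to a strength in (0, 1]
        self.weights = defaultdict(dict)

    def add_link(self, src, dst, weight=0.5):
        self.weights[src][dst] = weight

    def record_traversal(self, src, dst):
        """Strengthen a link each time a user follows it."""
        w = self.weights[src].get(dst, 0.5)
        # Move the weight a fraction of the way toward 1 (Hebbian-style)
        self.weights[src][dst] = w + self.learning_rate * (1.0 - w)

    def decay(self, rate=0.01):
        """Weaken all links slightly, so unused paths fade over time."""
        for src in self.weights:
            for dst in self.weights[src]:
                self.weights[src][dst] *= (1.0 - rate)

    def ranked_links(self, src):
        """Return a page's outgoing links, strongest first."""
        return sorted(self.weights[src].items(),
                      key=lambda item: item[1], reverse=True)

# Five users follow home -> news; that link overtakes home -> archive.
g = AdaptiveLinkGraph()
g.add_link("home", "news")
g.add_link("home", "archive")
for _ in range(5):
    g.record_traversal("home", "news")
print(g.ranked_links("home"))
```

The point of the sketch is only that a Web with such a mechanism would, like a neural network, gradually encode the collective behavior of its users in its connection strengths.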
Heylighen has done a comprehensive world-wide search for literature on the global brain, and posted the results at Principia Cybernetica.
Recently, Heylighen has assembled an e-mail "Global Brain Study Group" mailing list (see http://pespmcl.vub.ac.be/GBRAIN-L.html for details). Membership on the mailing list is restricted to those individuals who have published something (on paper or on-line) on the notion of emerging Web intelligence. I qualified as a member of the group only narrowly, by virtue of having posted a rough-draft article called "WorldWideBrain: Using the WorldWideWeb to Implement Globally Distributed Cognition" on my Website.
Though I am generally a reluctant participant in e-mail mailing lists (how much precious time they can absorb!), I signed up for this one with glee. So far the discussion has been interesting, although no dramatically new ideas have popped up. (Note that this discussion of the e-mail study group is current at the time of writing, but will doubtless be obsolete by the time these words are published. A complete record of the dialogue may be found at http://www.fmb.mmu.ac.uk:80/majordom/gbrain/)
At first I viewed the mailing list as an arena in which to work out ideas regarding WebMind, the intelligent Web software product I was in the initial stages of developing at the time. But it soon became apparent to me that, outside of Heylighen, his collaborator Johan Bollen, and myself, none of the group had seriously thought about what could be done on a technical level to make the Web more intelligent. The near-consensus of the group seemed to be that the Web was already on a trajectory toward intelligence, solely on the basis of input from commercial software developers, with no input from global-Web-mind-oriented theorists needed. Heylighen's and Bollen's ideas on global-Web-mind-oriented technology development are relatively simple, modelled on neural network theory; whereas I favor a more radical and complex approach based on my work in theoretical cognitive science (as in the WebMind model described above). But so far, rather than debating the merits of different approaches to making the Web intelligent, the discussion group seems inevitably to veer toward the philosophical -- toward the questions of what the global Web brain will be like, and how it will relate to human beings, individually and collectively.
The most striking thing about the discussion on the Global Brain Study Group list is not a presence but an absence -- the absence of serious disagreement on most issues regarding emerging Web intelligence. Everyone who has thought seriously about the global Web brain, it seems, has come to largely the same conclusions. The Web will become more self-organizing, more complex, and eventually the greater intelligence of its parts will lead to a greater intelligence of the whole. Human beings will be drawn into the Web intelligence one way or another, either by mind-downloading or virtual reality, or simply by continual interaction with Web-brain-savvy technology. In this way, human society will begin to act in a more structured way -- in a way directed by the global Web mind, which recognizes subtle emergent patterns in human knowledge, and creates new information on the basis of these patterns.
The relatively abstract focus of the Global Brain Study Group reflects, it seems clear, the collective mind-space of the set of individuals who have, at the present time, published on global Web intelligence. These are not computer hackers; they are rather, like myself, philosopher-scientists who are also power computer users. And, on reflection, this is not surprising. It seems that, in order to really see the future of something, a combination of detachment and involvement is ideal. Too detached, and one has no idea what is going on; one's prophecies will be disconnected from reality. Too involved, and one can't see the forest for the trees; one is apt to get caught up in short-term predictions, based on the micro-trends that one knows so well.
Anyhow, as Heylighen's extensive literature search makes clear, the Unix network hackers of the world have not yet turned on to the idea of global Web intelligence, and neither have the majority of ordinary computer users. I suspect, however, that this situation will change drastically over the next few years; and certainly over the next decade. At the time I am writing this, there is not yet a Usenet newsgroup called comp.global.brain or anything similar; but by the time you are reading this, there may well be. I expect that, in a decade's time, the global brain will be a staple of the popular computer press.
Up to this point, however, the most notable appearance of the global brain in the popular press has been a satire -- a brief piece by David Williams, which appeared in Wired in mid-1996, called The Human Macro-Organism as Fungus (see http://www.hotwired.com/wired/4.04/features/viermenhouk.html). This article is an interview with a fictitious scientist named Dr. Viermenhouk, who parodies Heylighen by taking the absurdist line that the global superorganism is already here. I will reproduce the final part of the interview here:
The Internet provides a big leap forward. As an organism grows more complex, it requires a sophisticated means of transferring data between its constituent entities. The Internet is little more than the nervous system of our human macro-organism.
Isn't your work derivative of other cybertheoreticians? Francis Heylighen, for example, has postulated the technology-driven transformation of humanity into a "super-being" or a "metabeing." Heylighen ...
... walks around all day with a printer port up his ass. I've seen the pictures. He's obsessed with a direct neural interface. His concept of a metabeing, a single unitary organism, hinges on us physically plugging into a "super-brain." He's missing the point. We already have.
Cells don't communicate through direct physical connections; they use electrical interfaces. The neural cells in our skulls communicate through an intricate chemical dance. To expect a macro-organism to develop differently from a multicellular organism is foolish.
Now that we monkeys are part of a greater being, the connection we share is through symbol. Human language, with all of its limitations, is sufficiently complex to support the information-transfer needs of an organism never seen before on Earth. You don't need wires up your butt. Just look at the symbols on your screen. Click on that hypertext link. Send that email. Be a good little cell.
And Heylighen's bizarre notion that this metabeing is an improvement; delusion! Individual humans are intriguing, sensual, spiritual creatures. The human macro-organism is more of a fungus. A big, appallingly stupid fungus. It only knows how to eat and grow, and when all of the food is gone, it will die. It has all the charm and wit of something growing in a dark corner of your basement. Adds a whole new dimension to the concept of human culture.
But what of individuality?
Humans are already too specialized to survive outside of their host organism. Pull a nerve cell from the brain and put it on the ground; within minutes it's a tiny gray blob of snot. Pull Bill Gates out of his office and put him in the veldt; in four days he's a bloated corpse in the sun. With the exception of a few madmen and anarchists, most of us can't even feed ourselves anymore; or communicate outside of our specialized fields. Put an anthropologist, a supply-side economist, and a mechanic in the same room. What the hell will they talk about? O. J. Simpson?
David Williams' notion of the superorganism as a fungus is humorous, but it also conceals a serious point. Yes, the fictitious Dr. Viermenhouk is wrong; the superorganism is not here yet, at least not in full force. But when it is here, will it necessarily be a boon to humanity? Or will it, indeed, be a fungus, a parasite on man, sucking the life-blood from human-created technology for its own purposes?
Heylighen himself appears to have taken the parody in good cheer. But not all global brain advocates have been so charitable. Valentin Turchin, for one, was deeply annoyed. In a message posted to the Global Brain Study Group, he made the following remarks:
Wired's interview with "Dr.Viermenhouk" which Francis calls a parody and fun, is rather a lampoon, in my view.
The only "fun" in the interview is the vulgar language, which allows the author to convey his negative emotion. I think that he is genuinely outraged by the very idea of direct (not through language) brain links. And he is speaking, I believe, for the majority.
The fact that we take this idea seriously, and explore its significance and various aspects, will upset most people. We must be prepared for this. I have already had some experiences of this kind....
Turchin believes that the global brain will have deep, positive, profound human meaning: that it will provide a way of bridging the gaps between human beings and fusing us into a collective awareness -- something that spiritual traditions have been working on for a long time. From this point of view, direct brain-computer links should not be viewed as tools for escape from human reality, but rather as gateways to deeper connection with other human beings. And, from this point of view, Williams' remarks are destructive, pointing the readers of Wired away from something genuinely valuable -- they are about as funny as someone going into schools and teaching children that vegetables are bad for your teeth.
It is not only the fictitious Dr. Viermenhouk, however, who has a negative attitude toward the global brain. Related fears have been voiced by Peter Russell himself, who started a thread in the Global Brain Study Group on the striking topic: Superorganism: Sane or Insane. Russell says,
I first explored the notion of superorganisms in my book "The Global Brain" -- written back in the late seventies before the Internet really existed. There I showed that, from the perspective of general living systems theory, human society already displays 18 of the 19 characteristics of living organisms (the missing one is reproduction - we haven't yet colonised another planet, although we have the potential to).
The interesting question for me is not whether a global brain is developing. It clearly is. But will this growing global brain turn out to be sane or insane? If civilization continues with its current self-centred, materialistic worldview it will almost certainly bring its own destruction.
I have long been fascinated by the striking parallels between human society and cancer. Cancers have lost their relationship to the whole, and function at the expense of the organism - which is insane, since a successful cancer destroys its own host. This is what we appear to be doing, and very rapidly. Our embryonic global brain would seem to have turned malignant before it is even fully born.
I believe the reason for our collective malignancy comes back to individual consciousness. We are stuck in an out-dated mode of consciousness, one more appropriate to the survival needs of pre-industrial society. Thus the real challenge is for human consciousness to catch up with our technology. We need to evolve inwardly before any of our dreams of healthily-functioning global brains can manifest.
This is more intelligently and respectfully stated than Williams' parody, but in the end it is somewhat similar. Instead of fungus, we have cancer -- a far better metaphor, because cancer cells come from within, whereas fungus comes from outside. Russell believes that we are on a path toward the emergence of the global brain, and that the Web is just one particular manifestation of this path. But, observing that we humans ourselves are riddled with neurosis and interpersonal conflict, he wonders whether the collective intelligence that we give rise to is going to be any better off.
On the one hand, Russell believes that the global brain will go beyond individual human consciousness, with all its limitations. In response to a post of mine, questioning whether the Internet might eventually develop a sense of "self" similar to that of human beings, he responded as follows:
The question is whether this superorganism will develop its own consciousness - and sense of self - as human beings have done.
Back then [in The Global Brain] I argued that there were close parallels between the structure and development of the human brain, and the structure and development of the global telecommunication/information network, which suggested that when the global nervous system reached the same degree of complexity as the human nervous system, a new level of evolution might emerge. But it would be wrong to characterise this new level as consciousness. It would be as far beyond consciousness, as we know it, as our consciousness is beyond life, as a simple organism knows it. So I don't think discussions as to whether the global social superorganism will develop a self akin to ours are that relevant.
Despite this conviction that the global brain will be far above and beyond human consciousness and human mental dynamics, however, he is worried that the flaws of individual human consciousness may somehow "poison" this greater emergent entity, and make it fatally flawed itself.
Responses to Russell's pessimistic post were mixed. Gregory Stock, for instance, took issue with Russell's generally negative judgement of the psychology of the average modern human. A biologist, Stock views human selfishness and shortsightedness as biologically natural, and believes that modern society and psychology, for all their problems, are ultimately wonderful things. His book MetaMan (see http://www.cbs.com/mysteries/metaman.html) treats contemporary technological society as a kind of superorganism, and views this superorganism in a very positive light.
Turchin, on the other hand, agrees substantially with Russell's pessimistic view of human nature and its implications for the mental health of the superorganism. He believes, however, that it may be possible to cure human nature, at the same time as developing new technologies that extend human nature, overcoming its limitations:
> We need to evolve inwardly before any of our dreams of healthily functioning global brains can manifest
Yes. This is why the Principia Cybernetica project came into being. Our goal is to develop -- on the basis of the current state of affairs in science and technology -- a complete philosophy to serve as the verbal, conceptual part of a new consciousness.
My optimistic scenario is that a major calamity will happen to humanity as a result of the militant individualism; terrible enough to make drastic changes necessary, but, hopefully, still mild enough not to result in a total destruction. Then what we are trying to do will have a chance to become prevalent. But possible solutions must be carefully prepared.
More positive than Turchin or Russell, though less so than Stock, the physicist Gottfried Mayer-Kress expressed the view that, perhaps, the global brain itself might represent the solution to the problems of individual human consciousness, rather than merely transplanting these problems onto a different level:
Peter Russell writes:
> The interesting question for me is not whether a global brain is developing. It clearly is. But will this growing global brain turn out to be sane or insane? If civilization continues with its current self-centred, materialistic worldview it will almost certainly bring its own destruction.
I thought a coherent world civilization was what we expect to emerge from a GlobalBrain.
In the context of complex adaptive systems we always would expect a "self-centred, materialistic worldview" of all subsystems (e.g. cells in the body, nation based civilizations etc.). Through the emergence of order parameters the subsystems begin to act in a coherent fashion and thereby increase the payoff for the subsystem.
A self-organized structure (order-parameter etc) will be stabilized if, in the competition between the interests of the sub-system and those of the super-system, the subsystem recognizes that it is better off if it supports the larger structure (e.g. pay taxes etc). A necessary condition for that to happen is that the coupling (communication) between the subsystems is strong enough that local modes cannot grow at the expense of the order parameter. In a social context that could mean that violations of the global rules/laws could be detected and suppressed.
For example, on a global scale it is still cheaper for most nations to choose to pollute the environment and waste energy. In a GlobalBrain world China would recognize that it is better not to introduce large scale individual transportation (cars) and Brazil would find it better for its own economy not to destroy the rainforest.
Regarding the "cancer" metaphor, Mayer-Kress observes that
even an "embryonic global brain" would be a coherent global structure and thereby directly contradict the basic definition of cancer. I would see the cancer analogy more in the global spread of a drug culture.
Essentially, Mayer-Kress's point is as follows: saying that humans are "individualistic" is the same as saying that humans represent the "top level" of a hierarchy of systems. An individualistic system is just one that has more freedom than the systems within it, or the systems that it is contained in. Cells within individual organisms are individualistic only to a limited extent; they are behaving within the constraints of the organism. Cells that make up single-celled organisms, on the other hand, are far more individualistic: they have more freedom than the systems of which they are parts. More on Mayer-Kress's vision of a global brain may be found, along with his work on complexity science, at http://www.ccsr.uiuc.edu/People/gmk/Publications/.
The global brain, according to Mayer-Kress, is almost synonymous with the decrease of human individualism. We will still have individual freedom, but more and more it will be in the context of the constraints imposed by a greater organism. And so, in this view, Russell's idea that the global brain might inherit the problems caused by human "self-centredness" is self-contradictory. The global brain, once it emerges, will be the top-level system, and will be individualistic -- but, as Russell himself notes, the nature of its individualism will be quite "inhuman" in nature.
Mayer-Kress, in this post, did not address the question of whether the global brain would be sane or insane in itself; rather, he defused the question by breaking the chain of reasoning leading from human neurosis to global brain neurosis. In my own reply to Russell's message, on the other hand, I attempted to take the bull by the horns and answer the question: What would it even mean for a global Web brain to be insane?
About sanity or insanity. Surely, these are sociocultural rather than psychological concepts. However, they can be projected into the individual mind due to the multiplicity of the self. An insane person in a society is someone who does not "fit in" to the societal mindset, because their self-model and their reality-model differ too far from the consensus. In the same vein, if one accepts the multiplicity of the individual self (as in Rowan's book SUBPERSONALITIES), one finds that in many "insane" people, the different parts of the personality do not "fit in" right with each other. So the jarring of world-models that characterizes the insane person in a culture is also present within the mind of the insane person. Because, of course, the self and mind are formed by mirroring the outside!
What does this mean for the global brain? View the global brain as a distributed system with many "subpersonalities." Then the question is not whether it is sane with respect to some outside culture, but whether it is sane WITH RESPECT TO ITSELF (a trickier thing to judge, no doubt). Do the different components of the global brain network all deal with each other in a mutually understanding way, or are they "talking past" each other...
A key point to remember here is that the global brain can be, to a large extent, real-time engineered by humans and AI agents. So that, if any kind of "insanity" is detected, attempts can be made to repair it on the fly. We are not yet able to do this sort of thing with human brains, except in the very crudest way (drugs, removing tumors, etc.).
My belief, as crudely suggested in this post to the Global Brain Study Group, is that the sanity of the global Web brain is an engineering problem. By designing Web software intelligently, we can encourage the various parts of the global Web brain to interact with each other in a harmonious way -- the hallmark of true sanity. The various neuroses of human mind and culture will be in there -- but they will be subordinate to a higher level of sanely and smoothly self-organizing structure.
The biggest potential hang-up, in my view, is the possibility that forces in human society may intervene to prevent the software engineering of the Web mind from being done in an intelligent way. Perhaps it may come about that a maximally profitable Web mind and a maximally sane Web mind are two different things. In this case we will be caught in a complex feedback system. The saner the Web mind, the saner the global brain of humanity, and the less likely the forces of greed will be to take over the Web mind itself.
One thing is noteworthy about this particular thread on the Global Brain Study Group: in spite of our disagreements on details, everyone in the Study Group seems to concur that a healthy, sane global brain would be a good thing. Intuitively, on a personal level -- as reflected in the above discussion -- I fully agree with this consensus. I do believe that a global brain will be a good thing. I am incredibly excited about its coming, and I want to do all I can to speed its advent.
Taking a more rational, objective view, however, I am forced to admit that this optimistic attitude may obscure some very tough questions. An alternative view was given by Paulo Garrido, in a message on the Principia Cybernetica mailing list, forwarded by Heylighen to the Global Brain Study Group. Garrido made the following remarks:
IF human society is an organism (in the autopoietic sense) and has a (the super) brain THEN most probably we should KILL such a being.
Because, societies, or better, the social interaction should be a TOOL to enlarge individual power and freedom or, if one prefers, individual survival and development. There is no point in maintaining a society if it is not that. If a society becomes an organism, chances are that individual power and freedom are diminished: to exist as such an organism must limit the degrees of freedom of its components. And in the case of human societies - the components are us! Only one type of autopoietic system should be allowed to emerge as a result of social interactions: the one that enlarges individual power and freedom - for all the individuals. Maybe such a system is possible if it is built in the emotional domain of love, corresponding to the goal of development. If it is not the case, it should be destroyed. Otherwise, we may see ourselves with no survival or comfort problems ... and with no reason to live.
Garrido's remarks, though somewhat paranoid in tone, are well-thought-out and represent a natural human fear. Are we all going to be absorbed into some cosmic organism, to lose our human individuality, our freedom, our sense of individual feeling and accomplishment? After all, does computer technology not represent the ultimate in de-humanizing technology?
The difficulty, of course, is that freedom is difficult to assess. Every major change in the social order brings new freedoms and eliminates old ones. And the relative "goodness" of one thing or another is not an objective judgement anyway -- standards of morality vary drastically from culture to culture.
By way of comparison, it is interesting to ask the "for good or for ill" question of the computer itself. The answer, in this case as in the case of the global Web brain, is not 100% clear.
What, one might ask, has the computer contributed to economic productivity? It is generally assumed that computers have improved our efficiency, but there are no good figures in existence to prove this. In fact, the most literal reading of the economic figures tells you that computers have had a bad influence on productivity. True, economic measures are always suspect, and it is particularly difficult to measure productivity in the service sector, which is where computers have had the greatest impact. But it seems quite plausible to me that the economists are right, and that computers, rather than increasing the productivity of most businesses, have largely had the effect of replacing one kind of work with another, one kind of employee with another.
And then one might ask, what has been the computer's total contribution to culture and the quality of life? Most of us who use computers regularly will probably answer "Huge! Computers have improved our lives tremendously!" After all, how dull life would be without e-mail; how tedious writing was before word processors; how nice Doom and SimCity and Flight Simulator are on a rainy day; how useful Excel is for the small businessperson, Mathematica for the scientist, etc. etc. And there is no denying these positives, but even so, there are other ways in which the cultural influence of computers has been terribly negative. Computers are, in one view, the ultimate conclusion of a century-long trend toward the impersonalization of business transactions.
How many times have you heard someone remark that in the old days, there was a personal relationship between a business person and their customers? There was an element of caring there, and not merely "caring" in the economic sense of caring about retaining someone's business. Business transactions were human interactions. This is a cliché, but like many clichés it is very deeply true. Anthropologist Marvin Harris, in his book Why Nothing Works, coined the word "dis-service" to refer to the manifold inconveniences caused to modern Americans by computerized inventorying and billing systems, and other technological and organizational developments that divorce business transactions from genuine human interaction. He points out the tremendous number of hours wasted, and the huge amount of stress caused, in trying to rectify the numerous errors and misunderstandings caused by the dehumanization of business.
I remember how much, in my childhood in the 1970's, I enjoyed going to the corner stores in the various towns we lived in. I always got to know the owner-operators quite well, and I liked going into the stores as much to chat with the owners as to buy candy or magazines or food. Sometimes I went there to buy nothing at all, just to kill a few minutes. Going back to those towns now, I see that these stores are gone, replaced by franchise operations, which are operated by ignorant people who do not wish to chat and do not need to know anything about the store they work in, because everything is computerized. When I moved to Hamilton, New Zealand, in 1994, I was surprised to rediscover my childhood experience of shopping in corner stores, becoming quite friendly with the owner-operator of the deli down the street from my house. Hamilton was much like the USA had been 15 or 20 years earlier. I realized how much of the warm, pleasant, human quality of daily life had been killed by the automation, franchising and computerization of small retail businesses.
So, computers have given us e-mail, word processing and lots of cool games and useful tools for working. They have given us ATMs, safer airplane flights, and cars with superior performance (though not necessarily lower repair bills, as anyone who has ever had to replace their car's "main computer" can attest). But, by contributing to the cultural trend toward depersonalization, they have also taken from us: they have taken a million little opportunities for genuine, rich, physical/mental human interaction. As with any other "advance," there has been a tradeoff.
I have chosen computers as an example because of their obvious relatedness to the global Web mind, but in fact, the same issues arise with any technological innovation, even civilization itself. Are we, one may ask, better off than our Stone Age predecessors? Some say yes, some say no. Some say that our ancestors worked only two hours a day, hunting and gathering, and spent the rest of the time enjoying each other and the world around them. No routine stress, no neurosis. There was genuine pain and suffering in times of cold or hunger or illness, true, but we civilized folk have not exactly eliminated these problems; and we have evolved our own specialized physical difficulties as well: AIDS, herpes, lung cancer, and so forth. In fact, modern diseases did not spread significantly until sedentarism replaced nomadism as a standard style of life.
The interesting thing about the ambiguous value of technological innovations, however, is how little it seems to matter, in practical terms. Progress, it seems, can never be resisted; and once it has been made, it can never be permanently retracted. These are heuristic laws of cultural development, to which we have seen no major exceptions in human history so far. There is an ebb and flow to human affairs, but there is also, in the long term, a powerful overall movement toward greater social complexity and greater technological and intellectual sophistication.
No one, today, is going to go back to using a typewriter to write. In the US today, only a few old or poor people use typewriters. Few middle-class parents are going to let their children grow up without computers; and in another decade, nearly every household will have some sort of network computer in it, just as nearly every household today has a television. Most probably the computer and the television will become a single appliance.
And no one, today, is going to go back to living in the Stone Age manner. Modern technology is too seductive. It makes me sad to witness the collapse of the few remaining Stone Age cultures, in such places as central Africa, the Amazon jungle, Papua New Guinea and outback Australia. But I cannot in good faith tell the citified Aboriginals I see here in Western Australia: "No! Go back to the desert! Hunt and gather!" Because I know that I would do exactly the same thing in their shoes. And why not?
The truth is that new technologies appeal to human nature. We like to have more, to see more, to do more. We like to extend our capabilities. Once we see the possibility of climbing up a little higher, we want to go there. We like to be more efficient and "cooler." And furthermore, as biologist Gregory Stock (a member of the Global Brain Study Group) has argued in MetaMan, this kind of attitude is not a quirk of our particular neurochemistry, it is a natural consequence of our intelligence. An intelligent organism is, by its very nature, constantly questing for more, constantly striving to exceed itself. It is possible for intelligent organisms to get locked into relatively stable, steady-state systems, such as Aboriginal culture, which remained basically the same for around 50,000 years. Such a steady-state system channels the need for growth and expansion in specific directions, while restricting it from other directions. But even so, the intelligent mind is always striving out in all possible directions, and as soon as a new direction becomes apparent -- be it computers, civilization or the global brain -- the intelligent mind will seek it out.
Valentin Turchin, like many other systems theorists, speaks of an inevitable rise toward more and more complex forms of life. He applies this principle to the emergence of life from inorganic matter, to the emergence of human intelligence from life, and to the emergence of the global superorganism from humanity. What this philosophy is doing, I believe, is merely positing the universe itself as an intelligent system. Turchin is saying that the universe, like the human mind, cannot resist a new innovation, a better, more efficient way of doing things. It may glide along for a while in a steady state, but eventually the new idea will occur, and then it will be irresistible. The universe, like the mind, has an eye for intricate new patterns.
And so, I believe that the global Web brain will be good in some regards and bad in others; but the one thing that I believe most strongly is that it will be irresistible. Along with its own good points and bad points, it will also help to get rid of some of the bad points of the technologies that support it. For instance, the depersonalization of business interaction, brought on by the computer, will disappear in the wake of new kinds of computer-mediated human-to-human interaction. In the long run, the voices calling to kill the global brain will be no more prominent than the voices calling, right now, to kill civilization and return to the jungle. The heuristic law of progress, the universe's tendency to build up hierarchies of emergent patterns, is stronger than the human race itself.
In discussing the philosophy of the global Web mind and the global brain, one is always dancing around the edges of the notion of spirituality. In the end, one cannot ignore the fact that the notion of a global brain has strong religious overtones. There is something very deeply moving about the idea of an overarching mind that embraces individual human minds, melding them into a greater whole. In fact, when phrased in appropriate ways, the global societal mind begins to sound almost supernatural, like some kind of divine overarching being.
I am not a religiously-oriented person; I am an "a-theist" in the sense of not believing in any deity. In spite of my skepticism about gods, however, I believe that these spiritual overtones in the concept of the global brain are genuine and important, and should not be mocked or ignored. Whatever one's opinion of its ultimate meaning, spirituality is a part of the human experience, and will continue to be as we move into a new era of digital being.
First of all, where the Web and spirituality are concerned, it is impossible not to mention the work of mid-twentieth century theologian Teilhard de Chardin. Teilhard's evolutionary, information-theoretic spiritual philosophy has reminded many people of modern communications technology; so much so that some have cited his work as a premonition of the Internet.
Teilhard prophesied that our current phase of being, in which individual humans live independent lives, would eventually be replaced with something else -- something more collective and more spiritual; something focused on information and consciousness rather than material being. He coined the word noosphere or mind sphere to refer to the globe-encircling web of thought and information that he thought would arise at the end of our current phase of being.
Teilhard was a Jesuit priest, and his ideas, for all their radicalism, emanate straight from the essence of Christianity. His vision is plainly an extension of the conventional Catholic notion of Judgment Day, a day on which history ends, and the angels descend from Heaven; the good are brought up to Heaven and the rest plunged down to hell. What Teilhard offers is a more refined Catholic eschatology, a subtler vision of the spiritual future, with a focus on information rather than on good versus evil. The subtlety of his vision was not appreciated by the fathers of the Church, who forbade Teilhard to publish and even exiled him to China; and so his major work, The Phenomenon of Man (see http://www.amazon.com/exec/obidos/ISBN=006090495X/principiacyberneA/), was only published after his death.
"Man," according to Nietzsche, "is something that must be overcome." Nietzsche saw man as a stepping-stone between beast and Superman. Teilhard, on the other hand, saw man as a stepping-stone between beast and global Mind. The justification for humanity as it is, Teilhard declared, lies in what humanity is going to evolve into: a collective, electric mental organism, transcending the boundaries between individuals and the boundary between mind and body. A cosmic, intelligent reflective entity, transforming information within itself with perfect efficiency. Teilhard spoke of progress, of evolution, of an inexorable, natural movement from simple material forms toward more and more sophisticated, abstract forms -- from the mundane toward the spiritual. At the end of the line, he proposed, was the Omega point -- the emergence of a spiritually perfected global brain or noosphere.
What is the relation between the noosphere and the Net? Some would say the two are virtually identical. Jennifer Kreisberg, writing in Wired, put the case as follows (see http://www.hotwired.com/wired/3.06/departments/electrosphere/teilhard.html):
Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness. It sounds a little off the wall, until you think about the Net, that vast electronic Web circling the Earth, running point to point through a nervelike constellation of wires....
Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would eventually coalesce into "the living unity of a single tissue" containing our collective thoughts and experiences....
I suspect that this overstates the case -- but not by a great deal. It is more imprecise than fundamentally incorrect. In truth, although the metaphors that Teilhard conceived for talking about his noosphere do mesh well with the Web, the Internet itself does not fulfill Teilhard's prophecy, nor will the emergence of a global Web brain. But when the global Web brain advances to Phase Two, and humans are incorporated into the globally distributed intelligence matrix -- then, at this point, Kreisberg's statement will be better justified, and one will have a digital system vaguely resembling a Teilhardian "mind sphere."
The first key point that Kreisberg glosses over is that Teilhard did not foresee that man would create a super-intelligent mind-sphere; he foresaw that man would become one. What is required in order to even approximately fulfill Teilhard's dream is, therefore, for humans to become part of the global brain, the intelligent Web. His vision more closely approximates a global societal mind than a global Web mind. It is important not to fudge the distinction between these two different things: Phase One, the emerging global Web mind; and Phase Two, the Russellian possibility of a human-incorporating global bio-digital intelligence.
Even a global societal mind, however, is a fair way off from Teilhard's idea of
The end of the world: the wholesale internal introversion upon itself of the noosphere, which has simultaneously reached the uttermost limit of its complexity and centrality.
The end of the world: the overthrow of equilibrium, detaching the mind, fulfilled at last, from its material matrix, so that it will henceforth rest with all its weight on God-Omega.
Phase Two of the global Web mind may indeed detach mind from its material matrix; and it may indeed represent a "phase transition," if not an "uttermost limit," in the complexity of the global network of human information. But anyone who believes all this will bring divine perfection is being foolish. New advances always bring problems along with solutions. The Net bears some resemblance to Teilhard's vision, but Teilhard's vision was of the mind sphere as a panacea, and that, one can be sure, the future Net will not be. At bottom, like all transcendental eschatologies, Teilhard's vision is a bit of a cop-out. By telling us perfection is just around the corner, it relieves us of the responsibility of seeing the perfection within the obvious imperfection all around us.
In the end, perhaps the most striking aspect of Teilhard's thought is the way it conjoins spirituality with information and communication. This, rather than his glittering portrayal of future utopia, is really what brings Teilhard so close to Internet technology. Many other theologians have written their own eschatologies, their own versions of Judgment Day. But Teilhard dispensed with the mythical symbols normally used to describe such events, and replaced them with very abstract, almost mathematical notions. In doing so, he roused the ire of the Catholic church, and he also -- quite unwittingly -- helped to bring spirituality into the computer age.
The other theological thinker who is crucial for understanding the Web -- though rarely if ever mentioned in this context -- is Carl Jung. The Web provides a whole new way of thinking about Jung's concept of the collective unconscious -- a realm of abstract mental forms, living outside space and time, accessible to all human beings, and guiding our thoughts and feelings. And it gives new, concrete meaning to his concept of archetypes -- particular mental forms living in the collective unconscious, with particular power to guide the formation of individual minds. (Jung's own writing style is famously impenetrable, but see the online journal Dynamical Psychology for a number of readable articles on Jung and archetypes: http://godel.psy.uwa.edu.au/dynapsyc/dynacon.html).
The concept of the collective unconscious has never achieved any status within scientific psychology: it is considered too flaky, too spiritual. Science, perhaps rightly, perhaps wrongly, has no place for an incorporeal realm of abstract forms, interacting with individual minds but standing beyond them. The global Web mind, however, will actually be an incorporeal -- or at least digital -- realm of abstract forms, interacting with individual minds but standing beyond them!
Some of the "archetypal forms" that Jung believed we absorb from the collective unconscious are basic psychological structures: the Self, the Anima/Animus (the female and male within), the Shadow. Others are more cultural in nature, e.g. the First Man. Some are visual: e.g. the right-going spiral, signifying being "sucked in"; the left-going spiral, signifying being "spewed out." But the most basic archetypes of all, in Jung's view, are the numbers. Small integers like 1, 2, and 3, Jung interpreted as the psychological manifestation of order. In fact, Jung suggested that all other archetypes could be built up out of the particular archetypal forms corresponding to small integers. This is a strikingly computer-esque idea: it is a "digital" view of the world, in the strictest sense. And so we see that Jung's thought, for all its obscurity and spirituality, was at bottom very mathematical: he viewed the abstract structures of the mind as emanating from various combinations of numbers. He viewed the collective unconscious as a digital system.
The global Web mind will, I suggest, fulfill Jung's philosophy in a striking and unexpected way: it will be a digital collective unconscious for the human race. For after all, the memory of the global Web mind is the vast body of humanly-created Web pages, which is a fair representation of the overall field of human thought, knowledge and feeling. So, as the global Web mind surveys this information and recognizes subtle patterns in it, it will be determining the abstract structure of human knowledge -- i.e., determining the structure of the human cultural/psychological universe. This is true even for the global Web mind as an independent entity; and it will be far more true if, as human beings integrate more and more with the Web, the global Web brain synergizes with humanity to form a global digital/societal mind.
Specifically, the most abstract levels of the global Web mind will bear the closest resemblance to the collective unconscious as Jung conceived it. These levels will be a pool of subtle, broad-based patterns, abstracted from a huge variety of different human ideas, feelings, and experiences, as presented on the Web. And this body of abstract information will be active. In Phase One, it will be involved in creating new links on the Web, in creating new Web content, in regulating various real-world and virtual activities based on this content. And ultimately, in Phase Two of the global Web mind, it will be involved in an interactive way with human thoughts and feelings themselves. In other words, precisely as Jung envisioned, the digital collective unconscious will be involved in forming the thoughts, feelings and activities of human beings' individual consciousnesses.
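The idea of recognizing patterns across Web pages and spinning them into new links can be made concrete with a deliberately tiny sketch. Nothing here resembles the actual system envisioned above; the page texts, the bag-of-words "patterns," and the 0.3 similarity threshold are all illustrative assumptions. The sketch simply proposes a link between two pages when their contents overlap enough:

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words vector for a page's text -- a crude stand-in
    for the subtle pattern recognition the essay envisions."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def suggest_links(pages, threshold=0.3):
    """Propose new links between pages whose contents share a
    common (here: merely lexical) pattern above the threshold."""
    vecs = {name: vectorize(text) for name, text in pages.items()}
    names = sorted(vecs)
    links = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(vecs[a], vecs[b]) >= threshold:
                links.append((a, b))
    return links

# Hypothetical page contents, chosen only for the example.
pages = {
    "jung.html": "archetypes of the collective unconscious shape the mind",
    "teilhard.html": "the noosphere is a collective sphere of mind and thought",
    "recipes.html": "bake the bread at a high temperature until golden",
}
print(suggest_links(pages))  # links the two kindred pages, not the recipe
```

A real Web mind would of course abstract far subtler patterns than shared vocabulary, but even this toy shows the structural point: the link-creating agency stands outside any individual page, acting on the whole pool at once.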
But what, in the end, are we to make of these parallels between Teilhard, Jung and the Net? Obviously, Carl Jung did not foresee the Internet and the global Web mind, any more than did Teilhard de Chardin. And neither did the engineers and scientists who designed the Internet have any intention of realizing the spiritual ideas of these philosophers.
The situation is well understood in the language of Jungian psychology: what has happened is that the philosophers and the engineers and scientists have tapped into the same emerging cultural archetype. The philosophers have expressed the human meaning of the archetype of the global interconnected web, and the archetype of active emergent pattern; the engineers and scientists, on the other hand, have made these archetypes physical and concrete. And we humans, as a race, are forever trapped between philosophy and engineering/science -- not a bad place to be trapped at all. We are on the very wonderful course of using engineering and science to fulfill our deepest philosophical, spiritual longings.
The Web of today may seem a long way off from these wild futuristic projections. I believe, however, that the difference is no greater than the difference between a newborn baby and an adult, or the difference between a Model T Ford and a late-model Ferrari. What we are seeing today is not a perfected, stable system, but a sophisticated, incredibly intelligent system in the process of being born. And thus this is a time of incredible excitement and incredible opportunity.
In the case of a human child, the early years are known as the formative years. The input a child receives during its first few years will mark the child indelibly, having more effect by far than anything that happens later in life. One can only suspect that the same may be true of the incipient global Web mind. Right now we have an infant on our hands -- a delicate, vulnerable, tender infant. We have the opportunity to treat this infant well, to guide it carefully and wisely toward its ultimate condition of superior intelligence. Those of us involved in Web-oriented technical work must take our jobs very seriously -- we are not just engineers and scientists, we are nursemaids to a new form of life. And those of us who simply use the Web must understand its ceaseless changes and fluctuations for what they are: manifestations of the expansion and world-exploration of a living, growing organism.