The WorldWideBrain:

Using the WorldWideWeb to Implement Globally Distributed Cognition

Ben Goertzel

Copyright Ben Goertzel, 1996

rough draft, for comments only


Abstract

The concept of the WorldWideBrain (WWB) is a simple one: a massively parallel intelligence, consisting of structures and dynamics emergent from a community of intelligent WWW agents, distributed worldwide.

Here, different strategies for implementing and instantiating the WWB are discussed, and one possible design, called Design 1, is developed in some detail. Ideas from the author's previous work in complex-systems-theoretic psychology are used to develop Design 1, and to explore its likely properties.

The basic component of Design 1 is called the WWB unit. WWB units, it is argued, could be disseminated as tools for automatic link creation, thus serving the immediate goals of individual Web users while at the same time contributing to the emergence of the WWB.


Introduction

The evolution of the Internet up till now can be divided, roughly speaking, into three phases:

  1. Pre-WWW. Direct, immediate interchange of small bits of text, via e-mail and Usenet. Indirect, delayed interchange of large amounts of text, visual images and computer programs, via ftp.
  2. WWW. Direct, immediate interchange of images, sounds, and large amounts of text. Online publishing of articles, books, art and music. Interchange of computer programs, via ftp, is still delayed, indirect, and architecture-dependent.
  3. Active Web. Direct, immediate interchange of animations and computer programs as well as large texts, images and sounds. Enabled by languages such as Java, the Internet becomes a real-time software resource. "Knowbots" traverse the web carrying out specialized intelligent operations for their owners.

The third phase, which I call the Active Web phase, is still at a relatively early stage of development, driven, for now, by the dissemination and development of the Java programming language. However, there is an emerging consensus across the computer industry as to what the ultimate outcome of this phase will be. For many applications, people will be able to run small software "applets" from WWW pages, instead of running large, multipurpose programs stored on their own computers' hard drives. The general-purpose search engines of the WWW phase will evolve into more specialized, more intelligent, individualized Web exploration agents -- "intelbots" or "knowbots." In short, the WWW will be transformed from a "global book" into a massively parallel, self-organizing software program of unprecedented size and complexity.

But, exciting as the "Active Web" phase is, it should not be considered the end-point of Web evolution. I believe it is important to look at least one step further. What comes after the Active Web, I propose, is the autopoietic, emergently structured and emergently intelligent Web -- or, in short, the WorldWideBrain. The Active Web is a community of programs, texts, images, sounds and intelligent agents, interacting and serving their own ends. The WorldWideBrain is what happens when the diverse population making up the Active Web locks into a global attractor, displaying emergent memory and thought-oriented structures and dynamics not programmed into any particular part of the Web.

Traditional ideas from psychology or computer science would seem to be of very little use for understanding and engineering a WorldWideBrain. However, ideas from complex-systems-theoretic psychology should be more relevant. In particular, in my past publications (Goertzel, 1993, 1993a, 1994, 1996) I have outlined an abstract theory of the structure and dynamics of intelligent systems which would seem to be entirely applicable to the WWB.

It may seem hasty to be talking about a fourth phase of Web evolution -- a WorldWideBrain -- when the third phase, the Active Web, has only just begun, and even the second phase, the Web itself, has not yet reached maturity. But if any one quality characterizes the rise of the WWW, it is rapidity. The WWW took only a few years to dominate the Internet; and Java, piggybacking on the spread of the Web, has spread more quickly than any programming language in history. Thus, it seems reasonable to expect the fourth phase of Internet evolution to come upon us rather rapidly, certainly within a matter of years.

Issues of Scale in Artificial Intelligence

Historically, the largest obstacle to progress in AI has always been scale. Put simply, our best computers are nowhere near as powerful as a chicken's brain, let alone a human brain. One is always implementing AI programs on computers that, in spite of special-purpose competencies, are overall far less computationally able than one really needs them to be. As a consequence, one is always presenting one's AI systems with problems that are far, far simpler than those confronting human beings in the course of ordinary life. When an AI project succeeds, there is always the question of whether the methods used will "scale up" to problems of more realistic scope. And when an AI project fails, there is always the question of whether it would have succeeded, if only it had been implemented on a more realistic scale. In fact, one may argue on solid mathematical grounds that intelligent systems should be subject to "threshold effects," whereby processes that are inefficient in systems below a certain size threshold become vastly more efficient once the threshold is passed.

Some rough numerical estimates may be useful. The brain has somewhere in the vicinity of 10^11 to 10^13 neurons (100 billion to 10 trillion), each one of which is itself a complex dynamical system. There is as yet no consensus on how much of the internal dynamics of the neuron is psychologically relevant. Accurate, real-time models of the single neuron are somewhat computationally intensive, requiring about the computational power of a low-end Unix workstation. On the other hand, a standard "formal neural-network" model of the neuron as a logic gate, or simple nonlinear recursion, is far less intensive. A typical workstation can simulate a network of hundreds of formal neurons, evolving at a reasonable rate.
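To put this gap in perspective, here is a back-of-the-envelope calculation of my own, using the low end of the figures just quoted (roughly 10^11 neurons, and roughly 10^2 formal neurons simulable per workstation):

    % Back-of-the-envelope estimate (my own, from the low end of the
    % ranges quoted above): simulating the brain even at the level of
    % formal neurons would require on the order of a billion workstations.
    \frac{10^{11}\ \text{neurons}}{10^{2}\ \text{neurons per workstation}}
        \;\approx\; 10^{9}\ \text{workstations}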

Clearly, whatever the cognitive status of the internal processes of the neuron, no single computer that exists today can come anywhere near to emulating the computational power of the human brain. One can imagine building a tremendous supercomputer that would approximate this goal. However, recent history teaches that such efforts are plagued with problems. A simple example will illustrate the point. Suppose one sets out, in 1995, to build a massively parallel AI machine by wiring together 100,000 top-of-the-line chips. Suppose the process of design, construction, testing and debugging takes three years. Then, given the current rate of improvement of computer chip technology (speed doubles roughly every eighteen months), by the time one has finished building one's machine in 1998, its computational power will be the equivalent of only 25,000 top-of-the-line chips. By 2001, the figure will be down to around 6,250.
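The arithmetic behind these figures is simple compound halving. The following sketch (my own illustration, not part of any WWB design) reproduces it, assuming a clean eighteen-month doubling period:

    // Illustrative sketch of the obsolescence arithmetic above, assuming
    // chip speed doubles every eighteen months. With 100,000 chips frozen
    // at the 1995 level, the machine's power equals ever fewer new chips.
    public class ChipObsolescence {
        public static void main(String[] args) {
            double chips = 100000;            // top-of-the-line chips, 1995
            double doublingYears = 1.5;       // eighteen-month doubling period
            for (int year : new int[]{1998, 2001}) {
                double equivalent = chips / Math.pow(2, (year - 1995) / doublingYears);
                System.out.printf("%d: equivalent to %.0f new chips%n", year, equivalent);
            }
        }
    }

Running this prints 25,000 for 1998 and 6,250 for 2001, matching the figures in the text.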

Instead of building a supercomputer that is guaranteed to be obsolete by the time it is constructed, it makes more sense to utilize an architecture which allows the continuous incorporation of technology improvements. One requires a highly flexible computer architecture, which allows continual upgrading of components, and relatively trouble-free incorporation of new components, which may be constructed according to entirely new designs. Such an architecture may seem too much to ask for, but the fact is that it already exists, at least in potential form. The WWW has the potential to transform the world's collective computer power into a massive, distributed AI supercomputer.

Once one steps beyond the single-machine, single-program paradigm, and views the whole WWW as a network of applets, able to be interconnected in various ways, it becomes clear that, in fact, the WWW itself is an outstanding AI supercomputer. Each Web page, equipped with Java code or something similar, is potentially a "neuron" in a world-wide brain. Each link between one Web page and another is potentially a "synaptic link" between two neurons. The neuron-and-synapse metaphor need not be taken too literally; a more appropriate metaphor for the role of a Web page in the WWW might be the neuronal group (Edelman, 1988). But the point is that Java, in principle, opens up the possibility for the WWW to act as a dynamic, distributed cognitive system. The Web presents an unprecedentedly powerful environment for the construction of large-scale intelligent systems. As the Web expands, it will allow us to implement a more and more intelligent WorldWideBrain, leading quite plausibly, in the not-too-distant future, to a global AI brain exceeding the human brain in raw computational power.

Strategies for Instantiation and Implementation

The creation of the WorldWideBrain requires a number of important decisions. Most importantly, there is the question of how to implement the WWB: how to turn the vague idea of the WWB into a concrete collection of algorithms. And then there is the question of how to instantiate the WWB: how to actually get the algorithms adopted throughout the WorldWideWeb.

Regarding instantiation, first of all, there are two natural models:

  1. Distributed ownership
  2. Proprietary intranet-based

The distributed ownership model is probably the more reasonable of the two. Given that the WWW itself is not owned by any single individual, corporation or institution, it would in a sense be simplest to develop the WorldWideBrain in a similar way. This model of instantiation, however, has profound implications for the way in which the WWB should be developed. Under the distributed ownership model, it becomes imperative that the WWB be developed in such a way that individuals and organizations can see some benefit from incorporating their own Web pages into the WorldWideBrain.

The intranet-based model also holds some promise, for different reasons. Many universities and corporations are developing "intranets" -- WWW-like networks with access restricted to insiders. Online services like America Online and Compuserve provide proprietary information networks that are similar to intranets, although at present they do not use HTML- and Java-based technology. Intranet-based AI systems would have the benefit of not needing to show benefits on the level of individual Web sites. It would be enough if the organization owning the intranet gained some benefit from the intranet brain as a whole.

Of course, the intranet-based model sacrifices the large scale which is one of the motivating factors behind the concept of the WorldWideBrain. However, a large organization might still be able to construct a moderately powerful Web-based AI system. Initial experiments with the WWB will likely take place on systems of this nature.

Combinations of the two models are also quite feasible, and may ultimately be the best path. One can envision a heterogeneously structured WorldWideBrain, with individual, self-justifying agents resident in individual Web sites, and larger integral components resident in intranets. Here intranets are being considered as relatively coherent subsets of the WWW, rather than as islands unto themselves: it is assumed that intranets are able to share certain information with the main body of the WWW.

Next, just as there are two natural methods of instantiating the WWB, there are also two natural methods for implementing it. The possible implementations of the WWB lie along a continuum, between two extremes, which I call the content-independent and content-driven models.

The idea of the content-independent model is to use the WWW as a medium for a network of intelligent agents, which in themselves have nothing to do with the contents of current WWW pages. Web pages, in addition to their directly human-relevant information, would contain information useful only in the context of the WorldWideBrain.

In the content-driven model, on the other hand, the WorldWideBrain would exist as an augmentation of the existing WorldWideWeb. The intelligent agents making up the WorldWideBrain would correspond naturally to the contents of individual Web pages, or groups of Web pages.

Real implementations may lie between one extreme and the other. One might have a WorldWideBrain that was intricately tied in with the contents of WWW pages, but also carried out some distributed computations not directly tied to any particular page content.

The two issues of instantiation and implementation are not independent. The distributed-ownership model of instantiation matches up naturally with content-driven implementation. And, by the same token, the intranet-based model of instantiation makes content-independent implementation seem much more promising.

Design for a WorldWideBrain

For the purpose of discussing and developing concrete algorithms, one must assume specific instantiation and implementation models. The design that I will describe here, and call Design 1, is based mainly on the distributed-ownership, content-driven model. However, it may also be considered as a mixed, multilayered model, with a lower layer based on the distributed-ownership, content-driven model, and an upper layer that is less content-driven and that might be implemented in either a distributed-ownership or intranet-based fashion.

First, some new terminology is required. I will use the term Web chunk to refer generically to a Web page, or collection of Web pages; and I will call the collection of all Web pages directly linked to a Web chunk W the neighborhood of W, or N(W). Similarly, the k-neighborhood of a Web chunk W, Nk(W), is the collection of Web pages that can be reached within k links or less from W.
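As a concrete illustration, Nk(W) can be computed by a breadth-first traversal of the link graph. In the sketch below (my own; all names are hypothetical), the link structure is assumed to be available as a map from each page to the pages it links to:

    import java.util.*;

    // Illustrative sketch: computing the k-neighborhood Nk(W) of a page W
    // by breadth-first traversal. The link graph is assumed to be given as
    // a map from each page's URL to the URLs of the pages it links to.
    public class Neighborhood {
        public static Set<String> kNeighborhood(Map<String, List<String>> links,
                                                String w, int k) {
            Map<String, Integer> depth = new HashMap<>();  // links traversed from w
            Queue<String> frontier = new ArrayDeque<>();
            depth.put(w, 0);
            frontier.add(w);
            while (!frontier.isEmpty()) {
                String page = frontier.remove();
                if (depth.get(page) >= k) continue;        // do not expand past k links
                for (String next : links.getOrDefault(page, List.of())) {
                    if (!depth.containsKey(next)) {
                        depth.put(next, depth.get(page) + 1);
                        frontier.add(next);
                    }
                }
            }
            depth.remove(w);                               // Nk(W) here excludes W itself
            return depth.keySet();
        }
    }

For k = 1 this reduces to the neighborhood N(W) of pages directly linked to W.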

Next, the term WWB unit will refer to a minimal intelligent agent within the WWB, consisting of:

  1. a store, consisting of a collection of humanly-authored Web pages, the explicit store, and a collection of machine-authored Web pages, the pattern store.
  2. a WWB agent, a program (written in a Web-friendly language such as Java) carrying out operations of pattern recognition and link creation.

The central concept in Design 1 is pattern recognition. A WWB agent must recognize patterns in Web chunks, not "a priori," but rather conditionally, with respect to the information contained in its store. WWB agents store the patterns that they recognize in their pattern stores. The types of patterns to be recognized are limited only by the array of available pattern-recognition algorithms. Initially, given a Web consisting primarily of textual information, the natural course is to focus on recognition of semantic patterns in text.
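Schematically, a WWB unit might be rendered as follows. This is only a sketch of the definition above; the class and method names, and the representation of patterns as strings, are my own assumptions:

    import java.util.*;

    // Schematic sketch of a WWB unit (names and representations assumed).
    // The agent recognizes patterns in Web chunks conditionally on the
    // unit's store, and records them as pairs in the pattern store.
    interface PatternRecognizer {
        Set<String> recognize(String chunkText, List<String> store);  // yields P(W|S)
    }

    class WWBUnit {
        final List<String> explicitStore = new ArrayList<>();       // humanly-authored pages
        final Map<String, Set<String>> patternStore = new HashMap<>();  // pairs (W, P(W|S))
        final PatternRecognizer agent;

        WWBUnit(PatternRecognizer agent) { this.agent = agent; }

        // Recognize patterns in chunk W, relative to the store S, and record them
        void observe(String chunkId, String chunkText) {
            patternStore.put(chunkId, agent.recognize(chunkText, explicitStore));
        }
    }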

Consider a WWB unit S. The pattern store of S consists of ordered pairs of the form

(W, P(W|S))

where W is a Web chunk and P(W|S) denotes the set of patterns that the WWB agent has recognized in W, relative to the information contained in the store S. Where V and W are two Web chunks, and V#W is the combination of the two, it should be noted that the patterns found in V#W may include patterns not found in either V or W considered separately. That is, P(V#W|S) may be a considerably larger set than the union of P(V|S) and P(W|S).

The second function of WWB agents, the formation of new links, comes directly out of the information contained in the pattern store. The basic principle is that similar Web pages should be linked, where a WWB agent judges similarity in terms of similarity of semantic patterns. First-order similarity between two pages V and W consists of the similarity between P(V|S) and P(W|S). The k'th-order similarity is the similarity between P(Nk(V)|S) and P(Nk(W)|S).
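First-order similarity, in particular, could be computed by any set-overlap measure on the pattern sets; the Jaccard overlap used below is an illustrative choice of mine, since Design 1 does not prescribe a particular measure:

    import java.util.*;

    // Illustrative first-order similarity between chunks V and W: the
    // Jaccard overlap of their pattern sets P(V|S) and P(W|S). The choice
    // of overlap measure is an assumption; Design 1 requires only some
    // numerical index of similarity between pattern sets.
    class Similarity {
        static double firstOrder(Set<String> pv, Set<String> pw) {
            if (pv.isEmpty() && pw.isEmpty()) return 0.0;
            Set<String> common = new HashSet<>(pv);
            common.retainAll(pw);                  // patterns shared by V and W
            Set<String> union = new HashSet<>(pv);
            union.addAll(pw);                      // patterns in V or W
            return (double) common.size() / union.size();
        }
    }

The k'th-order similarity would then apply the same measure to the pooled pattern sets of Nk(V) and Nk(W).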

Note that the links created in this way are naturally weighted. Some links reflect very intense similarities, while others reflect only mild similarities, and there is no reason why these differences should not be communicated by a numerical index of similarity. Neighborhoods, in this context, may naturally be considered as fuzzy sets, where the extent to which V belongs to N(W) is the weight of the link from W to V. Weights on links may be used to good effect in the computation of higher-order similarity.
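In symbols, the fuzzy-neighborhood idea might be written as follows (the notation is mine; sim denotes whatever similarity index the agent uses):

    % Fuzzy neighborhood membership: the degree to which V belongs to N(W)
    % is the weight of the link from W to V, i.e. the similarity of the
    % two chunks' pattern sets. (Notation introduced for illustration.)
    \mu_{N(W)}(V) \;=\; w(W \to V) \;=\; \mathrm{sim}\bigl(P(V|S),\, P(W|S)\bigr)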

The Psynet Model and the WorldWideBrain

The WWB unit, as described above, is a suitable basic unit for a WWB -- analogous less to a neuron than, perhaps, to a neuronal group. However, an intelligent system is more than just a chaotic network of interacting intelligent agents. It must have an emergent structure and dynamics. To understand what kind of emergent structure and dynamics one might expect from a WWB, I will now introduce a few simple ideas from complex-systems-theoretic psychology, as developed in my previous work (Goertzel, 1993, 1993a, 1994, 1996).

In my past work, I have developed an abstract model of intelligent systems that I now call the psynet model. The psynet model can be expressed using advanced mathematics but, at bottom, it is very simple indeed. And this simplicity is, on careful consideration, only to be expected. It would be absurd to think that the mind could boil down to some abstruse mathematical property of some sophisticated equation....

The first basic principle of the psynet model is that the mind is made of pattern/processes. "Pattern/process" is an ugly term, but there is no single word in English to denote a process which exists to recognize patterns, and which itself contains (or perhaps is) the pattern that it recognizes. In computer science terms, a pattern/process is a kind of agent. I have also called them "magicians," because of their ability to transform each other. Pattern/processes act on each other to produce new pattern/processes; and, indirectly, they can also lead to one another's destruction. In Design 1, pattern/process magicians are implemented as WWB units. The crucial role of pattern recognition in Design 1 was motivated in part by the psynet model.

The second principle of the psynet model is a basic dynamic of mind. This is a very, very simple dynamic. First of all: the magicians act on each other. Secondly: the magicians quickly perish. Each magician acts, alone and in conjunction with those other magicians that are close to it, and then, shortly after, it disappears. In acting, it produces new magicians, which proceed to act in their own right. Magicians also mutate, and mutation errors propagate, making the whole system incredibly unpredictable.

This may sound like chaos, but it is not, or at least it does not have to be. The reason is the phenomenon of autopoiesis (Varela, 1978). Systems of magicians can interproduce. A can produce B, while B produces A. A and B can combine to produce C, while B and C combine to produce A, and A and C combine to produce B. The number of possible systems of this sort is truly incomprehensible. But the point is that, if a system of magicians is mutually interproducing in this way, then it is likely to survive the continual flux of magician interaction dynamics. Even though each magician will quickly perish, it will just as quickly be re-created by its co-conspirators. Autopoiesis creates self-perpetuating order amidst flux.
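The three-magician example can be made concrete with a toy simulation, purely illustrative and of my own devising: every generation, all magicians perish, and the production rules alone determine the next generation.

    import java.util.*;

    // Toy sketch (purely illustrative) of an interproducing magician system:
    // A and B produce C, B and C produce A, A and C produce B. Starting
    // from the full trio, the population re-creates itself forever, even
    // though every individual magician perishes each generation. Any
    // proper subset, by contrast, dies out within two generations.
    public class Magicians {
        public static void main(String[] args) {
            Map<Set<String>, String> rules = Map.of(
                Set.of("A", "B"), "C",
                Set.of("B", "C"), "A",
                Set.of("A", "C"), "B");
            Set<String> population = new HashSet<>(Set.of("A", "B", "C"));
            for (int gen = 1; gen <= 5; gen++) {
                Set<String> next = new HashSet<>();
                for (Map.Entry<Set<String>, String> rule : rules.entrySet())
                    if (population.containsAll(rule.getKey()))
                        next.add(rule.getValue());   // producers act, then perish
                population = next;
                System.out.println("generation " + gen + ": " + population);
            }
        }
    }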

Some autopoietic systems of magicians might be unstable; they might fall apart as soon as some external magicians start to interfere with them. But others will be robust; they will survive in spite of external perturbations. These robust magician systems are what I call autopoietic subsystems. The third basic principle of the psynet model is that thoughts, feelings and beliefs are autopoietic subsystems. They are stable systems of interproducing pattern/processes.

In WWB terms, an autopoietic subsystem is a densely interlinked collection of Web units, each of which contains links to other elements of the collection, motivated by semantic similarity. If any one element of the system were deleted, by having its links within the system removed, it would very likely be recruited into the system again by the natural activities of the Web units. One would expect that groups of Web units dealing with the same topic would become autopoietic subsystems, but this is only the simplest example.

Finally, the fourth basic principle of the psynet model is that the small autopoietic subsystems in a mind spontaneously self-organize into a meta-attractor which I call the dual network.

The dual network, as its name suggests, is a network of pattern/processes that is simultaneously structured in two ways. The first kind of structure is hierarchical: simple structures build up to form more complex structures, which build up to form yet more complex structures, and so forth. The second kind of structure is heterarchical: different structures connect to those other structures which are related to them by a sufficient number of pattern/processes.
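As a data-structure sketch (my own, with hypothetical names), each node of a dual network would carry both kinds of structure at once: hierarchical parent/child links, and weighted heterarchical links to related nodes.

    import java.util.*;

    // Schematic sketch of a dual-network node (names hypothetical):
    // hierarchical structure via parent and children, heterarchical
    // structure via weighted associative links to related nodes.
    class DualNode {
        DualNode parent;                                       // one level up
        final List<DualNode> children = new ArrayList<>();     // one level down
        final Map<DualNode, Double> associates = new HashMap<>();  // related nodes

        // Record a symmetric associative link, weighted by similarity
        void associate(DualNode other, double similarity) {
            this.associates.put(other, similarity);
            other.associates.put(this, similarity);
        }
    }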

These two structures are seen everywhere in neuroscience and psychology. The whole vast theory of visual perception is a study in hierarchy: in how line-processing structures build up to yield shape-processing structures, which build up to yield scene-processing structures, and so forth. The same is true of the study of motor control: a general idea of throwing a ball translates into specific plans of motion for different body parts, which translate into detailed commands for individual muscles. It seems quite clear that there is a perceptual/motor hierarchy at work in the human brain. And those people concerned with artificial intelligence and robotics have not found any other way to structure their perceiving and moving systems: they also use, by and large, perceptual-motor hierarchies.

On the other hand, the heterarchical structure is seen most vividly in the study of memory. It is obvious that memory is associative. The various associative links between items stored in memory form a kind of sprawling network. The kinds of associations involved are extremely various, but can all be boiled down to pattern: if two things are associated in the memory then there is some other mental process which sees a pattern connecting them.

Clearly, the Web as it exists now attempts to be a heterarchical, associative network. The addition of intelligent, link-forming WWB units as proposed in Design 1 would certainly improve the Web in this regard, introducing links based on genuine similarity rather than on the personal knowledge and quirks of Web authors.

Hierarchical structure in the Web is provided, in a very crude and static way, by "tree" search utilities such as Yahoo. But these trees are a "dead" hierarchical structure rather than a living, active, intelligent one. A truly refined, adaptive hierarchical structure is lacking in the Web at the present time. What the psynet model suggests is that, until this gap is filled, the associative structure itself will remain messy and imperfect.

For, in the end, the heterarchical and hierarchical networks are not separate things; they are just one network of magicians, one network of autopoietic attractors.

The alignment of the hierarchical and heterarchical networks has some interesting consequences. For instance, one can show that, if the heterarchical network is going to match up with the hierarchical network, it will have to be in some sense fractally structured: it will have to consist of clusters within clusters within clusters..., each cluster corresponding to a higher level of the hierarchical network. And one can look at the way the networks reorganize themselves to improve their performance and stability. Every time the heterarchical network reorganizes itself to keep itself associative in the face of new information, the "programs" stored in the hierarchical network are in effect crossed over and mutated with each other, in a manner similar to the genetic algorithm, so that the whole dual network is carrying out a kind of evolution by natural selection.

What the psynet model suggests is that, in order for the Web to become a truly intelligent system -- a WWB -- two things must be done, in addition to the introduction of intelligent Web units:

  1. Encourage the formation of autopoietic subsystems

  2. Encourage the emergence of the dual network

How much "encouraging" is necessary here is not really clear at present. In the case of the human brain, it is unknown how much mental structure comes from genetics, how much is taught, and how much spontaneously emerges -- so we certainly do not have these answers in the case of the WWB. However, it should be noted that the "encouraging" here may take place on two different levels. First, we may make explicit efforts to engineer autopoietic subsystems and hierarchical structures in the Web. Second, we may take care, in the programming of Web agents, to encourage the emergence of the entities specified by the psynet model.

Instantiation of Design 1

Finally, let us return to the question of instantiation. Supposing that Design 1 is a viable one -- how might one go about actually getting it put into place on the WWW? This might seem an insuperable obstacle, given the distributed-ownership nature of the Web. But in fact, a natural strategy suggests itself.

One of the most elegant features of Design 1 is that its basic component, the WWB unit, offers humanly-meaningful functionality on the level of individual Web pages. The new links created by a WWB unit are useful for WWB operation, but they may also be useful for human readers of the WWB unit's explicit store. What is required to facilitate instantiation of the WWB according to Design 1 is an implementation of the WWB unit in which the links stored in the pattern store are placed into the explicit store in a way that is useful to human users. The WWB unit then becomes, not only a node in a global intelligent system, but a utility for the automatic creation of WWW links. This is a key point because, while few people may be interested in installing a program that helps create a globally emergent WWB, many people will be interested in installing a program that automatically supplies their Web pages with new, relevant links.

Implementing the WWB unit as a link creation utility would seem to be an excellent way to get the WWB project started. However, the psynet model suggests that, in the long run, WWB units devoted to particular, humanly-useful Web sites may not be enough. One wants to have higher-level WWB units, which recognize patterns in a broad range of WWB units, and may not be tied to any particular humanly-comprehensible contents. These units will have to be maintained by organizations acting "altruistically," in the sense that these higher-level WWB units benefit the WWB as a whole more than any individual Web information provider. It is anticipated that, once the initial network of WWB units has begun to show emergent behavior and global utility, the value of higher-order nodes should become apparent, and there should be no difficulty finding funding for these Web units.