Wild Computing -- copyright © 1999 Ben Goertzel


Chapter 3:
The Psynet Model of Mind

"The Law of Mind is one, and but one..."
– Charles S. Peirce

1. What is the Psynet Model?

The psynet model is a conceptual model of the mind, created not only for AI purposes, but for the analysis of human thought processes as well (see Goertzel, 1993, 1993a, 1994, 1997 for detailed expositions of various aspects of the model, as it developed over time). It aims to capture the abstract structures and dynamics of intelligence, under the hypothesis that these are independent of the underlying physical implementation. The model can be cast in mathematical language in various ways, and validated as an empirical scientific theory in various respects; but these perspectives will not be presented here. The goal here is simply to get across enough of the model to provide a conceptual grounding for various ideas in the chapters to follow. We will begin with a highly general, almost philosophical treatment of the psynet model, then show how a particular realization of this model leads to a computational model of the mind in terms of nodes, links and agents -- the Webmind architecture, to be outlined in more detail in Chapter 8.

The essential ideas of the psynet model are simple. A capsule summary is:

The use of the word "magician" here is perhaps worth commenting on, and relating to conventional computer science terminology. Gul Agha, in his book Actors: A Model of Concurrent Computation in Distributed Systems, defines an actor as follows:


Actors are computational agents which map each incoming communication into a 3-tuple consisting of:

  1. a finite set of communications sent to other actors;

  2. a new behavior (which will govern the response to the next communication processed); and,

  3. a finite set of new actors created



Magicians are more general than actors, in that they are not restricted to digital computers: in a quantum computing context, for example, magicians could be stochastic, and hence send an infinite set of possible communications determined by quantum-level randomness, violating Agha's first condition. Since the psynet model of mind is intended to apply to human brains as well as digital computers, and since human brains may well be quantum systems, the concept of "actor" is not adequate for the general psynet model. However, it is nevertheless true that, in a digital-computing context, a magician is a special kind of actor. In particular, it is an actor whose behaviors explicitly include

The psynet model therefore coincides well with the modern tradition of distributed computing. What the psynet model does, which nothing else in contemporary computing or cognitive science does, is to give a detailed plan for how a large community of computational agents should be set up so that the community, as a collective, evolves highly intelligent behavior. The right mix of agents is required, as well as the right kind of "operating system" for mediating agent interactions.

The Webmind AI system provides a general "agents operating system" for managing systems of software magicians that share meaning amongst each other, transform each other, and interact in various ways; the magicians may live in the RAM of a single machine, may be run on multiple processors, and may live across many machines connected by high-bandwidth cable. It provides mechanisms for magicians to represent patterns they have recognized in other magicians, and patterns they have recognized in the overall magician system of Webmind. And it also provides an appropriate assemblage of magicians, specialized for such things as

These magicians have their own localized behaviors but achieve their true intelligence only in the context of the whole magician system that is Webmind. This chapter describes the psynet model in general in a clear and concise way, and then explains how the particular architecture of the Webmind system serves to realize this model in an effective way on modern computer networks.

2. The Psynet Model of Mind in 37 Easy Lessons

There are many ways to construct the psynet model. This section will give a construction that is centered around the concept of "meaning" -- an approach that is particularly useful in a Webmind context, because the goal of Webmind as a product is to provide meaningful answers to human questions. The model will be presented here as a series of 37 Observations about the nature of mind. No attempt, in this exposition, will be made to determine the extent to which the observations are axiomatic as opposed to derived from other observations, assumptions, etc. These are interesting questions, but beyond the scope of a document whose main focus is the use of the psynet model and its general implications for Internet AI. In Chapter 8, we will return to these 37 observations and specifically indicate how each one is realized in the Webmind architecture.

Observation 1. Mind is a set of patterns, where a pattern is defined as "a representation as something simpler".

To understand what a "pattern" is, one must understand what "representation" and "simpler" are. In order to define simplicity one requires a "valuation" -- a way of assigning values to entities. In mathematical terms, a valuation is a mapping from some set of entities into some partially ordered domain, e.g. into the domain of numbers. A representation, on the other hand, is an entity that "stands for" another entity. To talk about representation one must have three entities in mind: the entity A being represented, the entity B doing the representing, and the entity C that recognizes B as a representation of A. The recognition activity of C is a kind of transformation; we may write C(B)=A. Thus, putting simplicity and representation together, the conclusion is:

Observation 2: To be able to have patterns, one must have entities that can transform entities into numbers (or some other ordered domain, to give simplicity judgements), and one must have entities that transform entities into other entities (so as to enable representation)

The space of entities in which patterns exist must be a space of entities that can be considered as transformations, mapping entities into other entities. The optimal name for entities of this sort is not clear; in some past writings on the psynet model these entities have been called "magicians," in others they have been called "agents" or "actors." Here we will stick with the term "magician", a whimsical term intended to evoke images of intertransformation: each magician, each entity in the mind, has the ability to transform other magicians by a variety of methods ("magic spells"). The mind is envisionable as a community of magicians constantly magicking each other into different magicianly forms.
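
To make these first two Observations concrete, here is a minimal illustrative sketch in Python. The names and the crude length-based simplicity measure are mine, chosen purely for illustration, and are not part of any Webmind code; the sketch simply shows a transformation reproducing a target entity from a simpler representation, which is exactly the situation in which we say a pattern is present.

    # Hypothetical sketch of "pattern as representation as something simpler".
    # Simplicity is crudely measured by description length.

    def simplicity(entity) -> int:
        """Smaller is simpler; here, just the length of a string description."""
        return len(repr(entity))

    def is_pattern(representation, transform, target) -> bool:
        """Representation B, via transformation C, stands for target A: C(B) = A,
        and (C, B) taken together must be simpler than A itself."""
        produces_target = transform(representation) == target
        simpler = simplicity(representation) + simplicity(transform.__name__) < simplicity(target)
        return produces_target and simpler

    # Example: a run-length description is a pattern in a repetitive string.
    target = "ab" * 50
    representation = ("ab", 50)

    def expand(rep):            # the transformation C
        s, n = rep
        return s * n

    print(is_pattern(representation, expand, target))   # True: simpler, and reproduces the target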

Observation 3: Mind is intrinsically dynamical. The transformation C(B)=A, in the definition of pattern, implies a "change": C changes B into A

Observation 4: Magicians can be combined in a way other than transformation; they may be combined in space. The result of joining A and B in space may be denoted A#B.

Observation 5: Spatial combination gives rise to the possibility of emergent pattern: patterns that are there in A#B but not in A or B individually.

Observation 6: The meaning of an entity may be defined as the set of all patterns associated with that entity -- where a pattern P may be associated with an entity A in several ways: P may be a pattern in A, P may be an emergent pattern in the combination A#B, or P may be close to A in spacetime (P and A may have occurred in about the same place at about the same time)

Observation 7: Pattern and meaning are subjective, in that they depend on who is measuring simplicity, and who is defining the set of permissible transformations

Observation 8: Meaning is dynamic as well as static. The patterns in an entity include patterns in how that entity acts, and interacts, over time.

Observation 9: In any given mind, at any given time some magicians are given more attention than others. Attention means that a magician is allowed to carry out transformations on other magicians.

Observation 10: A mind always possesses some degree of randomness (i.e., dynamics in which it can detect no patterns). Randomness occurs wherever attention occurs.

Observation 11: The basic law of mind dynamics is: A magician passes some of its attention to other magicians with whom it shares meaning

This dynamical law has a long history in philosophy; it was most clearly enunciated by the American philosopher Charles S. Peirce toward the end of the last century. The relation to neural network dynamics is clear and will be elaborated below: in a neural net, a neuron passes some of its activation to other neurons that it is connected to. If one thinks of a magician as a neuronal module, and posits that two modules share meaning if they are strongly interconnected, then Observation 11 fits in perfectly with neural net dynamics. However, it is important from an AI point of view that we adopt a given dynamic because of its psychological importance, rather than because it loosely models some aspect of the human brain.
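
For concreteness, the following is a rough Python sketch of the kind of attention-spreading dynamic Observation 11 describes. The function name, the decay parameter and the proportional allocation rule are illustrative assumptions, not a specification of Webmind's actual attention mechanism.

    # Hypothetical sketch: each magician passes a fraction of its attention to the
    # magicians with whom it shares meaning, in proportion to the degree of sharing.
    # This mirrors activation spreading in a neural network.

    def spread_attention(attention, sharing, decay=0.5):
        """attention: dict magician -> current attention level
           sharing:   dict magician -> {neighbour: degree of meaning sharing}
           decay:     fraction of attention passed on each step"""
        new_attention = {m: a * (1 - decay) for m, a in attention.items()}
        for m, a in attention.items():
            neighbours = sharing.get(m, {})
            total = sum(neighbours.values())
            if total == 0:
                new_attention[m] += a * decay   # nowhere to send it; keep it
                continue
            for n, w in neighbours.items():
                new_attention[n] = new_attention.get(n, 0.0) + a * decay * w / total
        return new_attention

    attention = {"A": 1.0, "B": 0.0, "C": 0.0}
    sharing = {"A": {"B": 0.8, "C": 0.2}}
    for _ in range(3):
        attention = spread_attention(attention, sharing)
    print(attention)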

Observation 12: Sharing of meaning may take many forms. Primally, meaning sharing may be of three different kinds:

symmetric

asymmetric

emergent.

These three types of meaning sharing are all "structural" in nature, and may be formalized as follows. Denote the meaning of magician A by the fuzzy set m(A), consisting of patterns in A or emergent between A and other entities. Symmetric meaning sharing is gauged by the formula

[m(A) intersect m(B)] / [m(A) union m(B)]

Asymmetric meaning sharing is given by

m(A) / [m(A) union m(B)]

Emergent meaning sharing is given by

[m(A#B) - m(A) - m(B)] / [m(A) union m(B)]

Meaning sharing incorporates temporal and spatial reality to the extent that the meaning m(A) of A includes entities that occurred close to A in spacetime.
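
One concrete way to realize these three measures is sketched below in Python, treating each meaning m(A) as a fuzzy set of patterns (a mapping from patterns to membership degrees) and interpreting the bracketed quantities above as fuzzy set sizes. The representation, the min/max fuzzy operators and the example values are illustrative assumptions only.

    # Hypothetical sketch of the three structural meaning-sharing measures.
    # A fuzzy meaning set is a dict mapping patterns to degrees in [0,1];
    # its "size" is taken here as the sum of its membership degrees.

    def size(m):
        return sum(m.values())

    def f_union(m1, m2):
        return {p: max(m1.get(p, 0.0), m2.get(p, 0.0)) for p in set(m1) | set(m2)}

    def f_intersect(m1, m2):
        return {p: min(m1.get(p, 0.0), m2.get(p, 0.0)) for p in set(m1) & set(m2)}

    def f_subtract(m1, m2):
        return {p: max(0.0, m1.get(p, 0.0) - m2.get(p, 0.0)) for p in m1}

    def symmetric_sharing(mA, mB):
        return size(f_intersect(mA, mB)) / size(f_union(mA, mB))

    def asymmetric_sharing(mA, mB):
        return size(mA) / size(f_union(mA, mB))

    def emergent_sharing(mAB, mA, mB):
        return size(f_subtract(f_subtract(mAB, mA), mB)) / size(f_union(mA, mB))

    mA  = {"p1": 1.0, "p2": 0.5}
    mB  = {"p2": 0.8, "p3": 1.0}
    mAB = {"p1": 1.0, "p2": 0.8, "p3": 1.0, "p4": 0.9}   # meaning of the combination A#B
    print(symmetric_sharing(mA, mB), asymmetric_sharing(mA, mB), emergent_sharing(mAB, mA, mB))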

The above observations pertain to mind but do not directly address the concept of "intelligence." Intelligence, however, can be approached in a similar way:

Observation 13: Intelligence may be defined as the ability to achieve complex goals in complex environments

Observation 14: The complexity of an entity may be defined as the total amount of pattern in that entity, or equivalently, the amount of meaning in the entity. Thus, intelligence is the ability to achieve meaningful goals in meaningful environments.

To compute the amount of meaning means to take all X in the meaning m(A) of an entity A, and add up the degree of membership of X in this fuzzy set. The catch is that one must take not an ordinary sum but a "non-overlapping sum," not counting twice two patterns that express essentially the same thing. The correct formulation of this non-overlapping sum is a problem in algorithmic information theory.
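
The following sketch illustrates one naive way such a non-overlapping sum might be computed, assuming the overlap relation between patterns is simply given; in reality, estimating that relation is where the algorithmic-information-theoretic difficulty lies. All names and the greedy selection rule are hypothetical.

    # Hypothetical sketch of a "non-overlapping sum" of pattern intensities: sum the
    # membership degrees, but never count two patterns that express essentially the
    # same thing more than once.

    def complexity(meaning, overlaps):
        """meaning:  dict pattern -> membership degree
           overlaps: set of frozensets {p, q} of patterns judged to express the same thing"""
        chosen, total = [], 0.0
        # greedily keep the most intense pattern from each overlapping group
        for p, degree in sorted(meaning.items(), key=lambda kv: -kv[1]):
            if any(frozenset({p, q}) in overlaps for q in chosen):
                continue
            chosen.append(p)
            total += degree
        return total

    m = {"birds fly": 0.9, "most birds fly": 0.85, "stones sink": 0.6}
    print(complexity(m, {frozenset({"birds fly", "most birds fly"})}))   # 0.9 + 0.6 = 1.5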

Note that complexity is defined in terms of pattern, which is defined in terms of simplicity. In this sense, this definition of complexity is not entirely satisfactory from a philosophical point of view. However, in any formal system, one must take some things as basic and undefined. I have chosen to take simplicity and transformation as basic quantities, and derive others, such as pattern, complexity and intelligence, from these.

This is a subjective rather than objective definition of intelligence, in that it relies on the subjective identification of what is and is not a pattern. If dolphins are bad at achieving goals that we think are meaningful, and at operating in environments that we think are meaning-laden, this means that they are not intelligent with respect to our own subjective simplicity measures, but they may be highly intelligent with respect to some other simplicity measure, e.g. their own. The upshot is that this definition of intelligence is pragmatically valuable only in comparing different entities of like kind -- i.e., different entities sharing largely the same goals, and comfortable in largely the same environments.

These definitions lead to the following observation:

Observation 15: In order to achieve complex goals in complex environments -- i.e., to be intelligent -- a complex mind is required

This is important from an engineering perspective, because it tells us that the dream of a mind in 100 lines of code is unachievable. This point comes up again in the discussion of specialization among magicians, below. It is also important in pointing out problems that can occur with complex systems engineering -- which only reflexive intelligence can solve:

Observation 16: A complex mind, implemented in a physical medium, will require continual modification of its internal parameters to assure steady intelligent functioning. This modification must be done intelligently in some cases, and so there must exist certain magicians with a special feedback relation to the physical medium determining the parameters of mental action.

A complex environment is one with a lot of patterns; in order to recognize a complex web of patterns in an environment, however, a long and deep exposure to this environment is required. This tells us that an intelligent system is necessarily hooked up to a "rich" environment via perceptual sensors, "rich" meaning rich in pattern. Furthermore, it must be able to proactively search for patterns:

Observation 17: Pattern recognition in a complex environment is best done by a combination of perception, cognition (internal transformation of perceptions), and action

Observation 18: A substantial amount of a mind's attention must often be allocated to recognizing pattern in its environment, i.e. to this threefold "perceptual/cognitive/active loop."

A mere collection of patterns recognized in an environment, however, is never going to be a very intelligent mind. Mind is characterized by certain universal, "archetypal" structures.

Observation 19: A "magician system" is a collection of magicians that is self-producing, in the sense that any magician in the system can be produced by the combination of some other magicians in the system. Minds are magician systems, at least to within a high degree of approximation.

This is similar to the idea that minds are autopoietic systems, in the sense of Maturana and Varela.

A terminological question arises here: When do we want to call a collection of patterns a mind? Is every collection of patterns a mind, or is intelligence required? Does a mind have to be a magician system, or not? These are not very important questions, in that they just pertain to the definitions of words. A sound practice is to refer to a mind as the set of patterns in an intelligent system. Since the definition of intelligence is fuzzy, the definition of mind is fuzzy as well, and the conclusion is that everything is mind-ful, but some things are more mind-ful than others.

Observation 20: Highly intelligent minds are characterized by hierarchical structures. The definition of hierarchy in this context is: A relates to {B[1], B[2], ..., B[k]} hierarchically if each of the B[i] asymmetrically shares much meaning with A. The process of creating hierarchical structure is called "clustering" or "categorization."

Observation 21: Highly intelligent minds are characterized by heterarchical structures, large connected networks of symmetrically meaning sharing entities

Observation 22: In a highly intelligent system, the hierarchical and heterarchical structures of mind are aligned, so that in many cases, when A relates to {B[1],...,B[k]} hierarchically, each B[i] relates symmetrically to a number of the other B[j]

This alignment of hierarchy and heterarchy has sometimes been called the "dual network" of mind.
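
As a rough illustration, a dual-network condition of this kind could be checked as follows, given some measures of symmetric and asymmetric meaning sharing (for instance, the ones sketched after Observation 12); the function name and thresholds are arbitrary illustrative choices, not part of any actual design.

    # Hypothetical sketch of the "dual network" alignment: a parent relates hierarchically
    # to its children if each child asymmetrically shares much meaning with it, and the
    # children in turn relate heterarchically (symmetrically) to one another.

    def is_dual_network_node(parent, children, asym, sym,
                             asym_threshold=0.5, sym_threshold=0.3):
        """asym(x, y): degree to which x asymmetrically shares meaning with y
           sym(x, y):  degree of symmetric meaning sharing between x and y"""
        hierarchical = all(asym(b, parent) > asym_threshold for b in children)
        heterarchical = all(sym(b1, b2) > sym_threshold
                            for i, b1 in enumerate(children) for b2 in children[i + 1:])
        return hierarchical and heterarchical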

Observation 23: Minds are finite, so that if they live long enough, they must forget. They will run into situations where they lose the B involved in a representation C(A)=B, but retain the pattern A that was recognized.

Forgetting has profound consequences for mind. It means that, for example, a mind can retain the datum that birds fly, without retaining much of the specific evidence that led it to this conclusion. The generalization "birds fly" -- a pattern A in a large collection of observations B -- is retained, but the observations B are not.

Observation 24: A mind's intelligence will be enhanced if it forgets strategically, i.e., forgets those items which are the least intense patterns

Observation 25: A system which is creating new magicians, and then forgetting magicians based on relative uselessness, is evolving by natural selection. This evolution is the creative force opposing the conservative force of self-production.
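
A toy sketch of this interplay between the creative and conservative forces, with entirely hypothetical names and selection criteria, might look as follows: new magicians are produced by combining existing ones, and the least intense are forgotten whenever the system's finite capacity is exceeded.

    # Hypothetical sketch of strategic forgetting as natural selection.

    import random

    def evolve(population, intensity, combine, capacity, steps):
        """population: list of magicians; intensity(m) -> usefulness score;
           combine(a, b) -> new magician produced from a and b."""
        for _ in range(steps):
            a, b = random.sample(population, 2)
            population.append(combine(a, b))          # creative force: new magicians
            population.sort(key=intensity, reverse=True)
            del population[capacity:]                 # conservative force: forget the weakest
        return population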

Observation 26: A pattern A is "grounded" to the extent that the mind contains entities in which A is in fact a pattern

For instance, the pattern "birds fly" is grounded to the extent that the mind contains specific memories of birds flying. Few concepts are completely grounded in the mind, because of the need for drastic forgetting of particular experiences.

Observation 27: "Reason" is a system of transformations specialized for producing incompletely grounded patterns from incompletely grounded patterns.

Consider, for example, the reasoning "Birds fly, flying objects can fall, so birds can fall." Given extremely complete groundings for the observations "birds fly" and "flying objects can fall", the reasoning would be unnecessary -- because the mind would contain specific instances of birds falling, and could therefore get to the conclusion "birds can fall" directly without going through two ancillary observations. But, if specific memories of birds falling do not exist in the mind, because they have been forgotten or because they have never been observed in the mind's incomplete experience, then reasoning must be relied upon to yield the conclusion.

So far this is a highly general theory of the nature of mind. Large aspects of the human mind, however, are not general at all, and deal only with specific things such as recognizing visual forms, moving arms, etc. This is not a peculiarity of humans but a general feature of intelligence.

Observation 28: The specialization of a transformation may be defined as the variety of possible entities that it can act on. The magicians in a mind will have a spectrum of degrees of specialization, frequently with more specialized magicians residing lower in the hierarchy.

The necessity for forgetting is particularly intense at the lower levels of the system. In particular, most of the patterns picked up by the perceptual-cognitive-active loop are of ephemeral interest only and are not worthy of long-term retention in a resource-bounded system. The fact that most of the information coming into the system is going to be quickly discarded, however, means that the emergent information contained in perceptual input should be mined as rapidly as possible, which gives rise to the phenomenon of "short-term memory."

Observation 29: A mind must contain magicians specialized for mining emergent information recently obtained via perception. This is "short term memory." It must be strictly bounded in size to avoid combinatorial explosion, since the number of combinations (possible grounds for emergence) of N items is exponential in N.

The short-term memory is a space within the mind devoted to looking at a small set of things from every possible angle. The bound of short-term-memory size in humans and animals seems to be 7+/-2. For combinatorial reasons, it seems likely that any physical system of similar scale will have a similar bound.
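
The combinatorial point can be made concrete with a small sketch: mining emergent pattern among the items held in short-term memory means examining their combinations, and the number of combinations of N items grows as 2^N. The function below is purely illustrative; nothing in it corresponds to an actual Webmind component.

    # Hypothetical sketch of why short-term memory must be small: with 7 items there
    # are 2**7 = 128 subsets to consider; with 50 items there would be roughly 10**15.

    from itertools import combinations

    def mine_emergence(stm_items, emergent_pattern):
        """emergent_pattern(subset) -> a pattern found in the combination, or None."""
        found = []
        for size in range(2, len(stm_items) + 1):
            for subset in combinations(stm_items, size):   # exponentially many subsets
                p = emergent_pattern(subset)
                if p is not None:
                    found.append(p)
        return found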

Observation 30: The short-term memory may be used for tasks other than perceptual processing, wherever concentrated attention on all possible views of a small number of things is required

One of the things that magicians specialize for is communication. Linguistic communication is carried out by stringing together symbols over time. It is hierarchically based in that the symbols are grouped into categories, and many of the properties of language may be understood by studying these categories.

Observation 31: Syntax is a collection of categories, and "syntactic transformations" mapping sequences of categories into categories. Parsing is the repeated application of syntactic transformations; language production is the reverse process, in which categories are progressively expanded into sequences of categories.
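
A toy illustration of this view of parsing, using an invented three-rule grammar (the rules and category names are illustrative only), is the following: parsing repeatedly replaces a matching sequence of categories by a single category until no rule applies.

    # Hypothetical sketch of Observation 31: syntax as categories plus transformations
    # mapping sequences of categories into categories; parsing is their repeated application.

    RULES = {                      # sequence of categories -> category (toy grammar)
        ("Det", "Noun"): "NP",
        ("Verb", "NP"): "VP",
        ("NP", "VP"): "S",
    }

    def parse(categories):
        """Repeatedly replace a matching subsequence of categories by its category."""
        cats = list(categories)
        changed = True
        while changed and len(cats) > 1:
            changed = False
            for i in range(len(cats) - 1):
                key = tuple(cats[i:i + 2])
                if key in RULES:
                    cats[i:i + 2] = [RULES[key]]
                    changed = True
                    break
        return cats

    print(parse(["Det", "Noun", "Verb", "Det", "Noun"]))   # ['S']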

Observation 32: Semantics is a collection of categories, and "semantic transformations" mapping categories into categories, category elements into category elements, transformations into categories, and semantic transformations into semantic transformations.

Observation 33: A key psychological role of syntax is to transfer semantic knowledge from strongly grounded patterns to weakly grounded or entirely ungrounded patterns.

Observation 34: Language is useful for producing magicians specialized for social interaction. Syntax in particular is crucial for social interaction, because another intelligence's observations are in general ungrounded in one's own experience.

Language is for communication with others, and is tied up with sociality; but the structures used in language are also essential for purely internal purposes.

Observation 35: The most intelligent minds have selves, where a "self" S is a pattern which a mind recognizes in the world, with the property that, according to the mind's reasoning, the substitution of S for the mind itself would produce few changes. I.e., the self asymmetrically shares meaning with the entire mind.

Observation 36: The "self" of a mind is a poorly grounded pattern in the mind's own past. In order to have a nontrivial self, a mind must possess, not only the capacity for reasoning, but a sophisticated reasoning-based tool (such as syntax) for transferring knowledge from strongly grounded to poorly grounded domains.

Observation 37: The existence of a society of similar minds makes the learning of self vastly easier

The self is useful for guiding the perceptual-cognitive-active information-gathering loop in productive directions. Knowing its own holistic strengths and weaknesses, a mind can do better at recognizing patterns and using these to achieve goals. The presence of other similar beings is of inestimable use in recognizing the self -- one models one's self on a combination of: what one perceives internally, the effects of oneself that one sees in the environment, and the structures one perceives in other similar beings. It would be possible to have a self without society, but society makes it vastly easier, by leading to syntax with its facility at mapping grounded domains into ungrounded domains, and by providing analogues on which inferences about the self can be based.

3. Psynet AI

The psynet model of mind, as developed above, is clearly far too abstract to lead directly to any particular software program or hardware design. It can be implemented in many different ways. But, even without further specialization, it does say something about AI. It dictates, for example,

It is interesting to note that these criteria, while simple, are not met by any previously designed AI system, let alone any existing working program.