
FACES OF PSYCHOLOGICAL COMPLEXITY

(Second draft of an introductory chapter for the forthcoming edited volume Complex Systems and Cognitive Science)

Ben Goertzel

Psychology Department
University of Western Australia


1. INTRODUCTION

What is a complex system? Debates as to the formal definition of complexity continue, but the underlying concept is clear. We are talking about systems whose dynamics are dominated by nonlinear interactions between their parts, and whose behavior can be understood in terms of emergent properties that ensue from these interactions. This is a crucial area of study, for in the real world, subtle interdependence is the rule rather than the exception. The vast majority of interesting properties of real-world systems have to do with nonlinearity, interdependence and emergence.

Until very recently the focus of science has been predominantly "reductionist" -- i.e., based on the analysis of complex systems in terms of the behavior of relatively independent components. Intricate webs of interdependence, the focus of most pre-scientific belief systems, have been from the point of view of science things to be avoided or ignored. The General Systems theorists of the middle decades of this century sought to change this, and promoted a more holistic kind of science, but their efforts met with disappointingly little success. With the advent of modern computer technology, however, the situation has improved considerably. We now have a tool for studying holistic properties of complex adaptive systems. And we are originating new theoretical concepts to go along with this tool, with astonishing rapidity.

Complex systems ideas have proved useful in all areas of natural science, from solid state physics to chemistry to computer science to evolutionary and organismic biology (Green and Bossomaier, 1992; Stonier and Lin, 1994). It is only natural that they should also be applied to that most complex of systems, the mind/brain.

The real question, perhaps, is not whether the mind is a complex system, but whether it is too complex for the current tools of complex systems science. Are we up to the challenge of modeling mind as a complex system? I believe that the answer is yes. The papers in this volume are evidence in favor of this contention.

The Discipline of Complex Systems Science

It is quite possible that, just as the past few decades have brought us departments of computer science, over the next half-century there will spring up university departments of complexity science. On the other hand, it seems more likely to this author that the study of complex systems will never crystallize into an individual discipline, but will rather diffuse through all branches of science. In other words, rather than complexity science becoming a full-fledged science of its own, science as a whole may become complexity science.

Some complexity science researchers are explicitly motivated by the search for general, unifying principles of complex system behavior. A great deal of media attention has been attracted by some of these claims -- for instance, by the work of Santa Fe Institute researchers on the "edge of chaos." But, although the search for such unifying principles is a valuable and valid long-term research goal, it must be emphasized that the existence of such principles is still a completely open question.

The papers in this volume are not concerned so much with complexity science as a discipline unto itself, nor with unifying laws for complex systems, but rather with complexity science as it manifests itself in the particular study of psychological systems. The study of complex psychological systems has been largely neglected by the Santa Fe Institute, the Center for Complex Systems, and other acknowledged centers for complex systems research. It has been promoted by various societies and organizations, e.g. in the U.S. by the Society for Chaos Theory in Psychology, and in Australia by a community of individuals associated with the Australian Mathematical Psychology and Australian Complex Systems conferences. By and large, however, the infusion of complexity science ideas into psychology has been a widespread, self-organizing phenomenon, involving a loosely connected web of researchers around the world.

The Fragmentation of Psychology

Complex systems theory is relevant to every single area of scientific inquiry. I would argue, however, that no discipline is in more desperate need of complex systems models and ideas than psychology. In psychology, analysis-based methods have been much less successful than they have been in the physical sciences. Even the most elementary questions seem to require an holistic approach.

No other branch of science is quite so fragmented as psychology. Different aspects of mental function -- e.g. memory, perception, action, learning -- are modeled in terms of almost completely different concepts, when it is clear that, in the mind/brain, these operations are carried out in a relatively unified and synergetic way. Cognitive psychologists construct models of mental processes; behaviorists deny or at least ignore the existence of abstract mental processes; neuropsychologists use information about neurochemistry and neuroanatomy to arrive at their own, largely insular theories. In sum, as a large number of writers have observed, psychology really has no unifying paradigm. There is no theoretical perspective capable of incorporating and rendering consistent all the different forms of psychological knowledge. I believe that complexity science has the potential to help psychology overcome this fragmentation and become a truly unified theory of the mind. This is a long-term research goal similar to, but more specific than, the search for unifying principles of complex systems.

One might well question the need for a unifying, holistic theoretical perspective. Why one perspective only, why not a plethora of overlapping views? This objection has a certain plausibility, but in the end it is not sufficient. For, not only is psychology fragmented, but, by comparison to the natural sciences, it is also exceedingly shallow. There is no way to tell whether psychologists will ever uncover "natural laws" as deep and productive as those which physical scientists have found. Perhaps such laws do not exist. But it seems clear that, if they do exist, the only way we are going to find them is to begin by constructing a unifying paradigm within which hypothetical general psychological laws can be framed.

The quest for a unifying psychological paradigm and deep psychological laws also has an importance beyond psychology proper. For instance, it has a central significance for the project of artificial intelligence. So far, as a rule, AI researchers have proceeded in a largely ad hoc fashion, cobbling together structures loosely modeled on human thought processes with algorithms chosen for their mathematical elegance and efficiency. It has been almost impossible to determine which aspects of human thought are necessary for intelligence, and which aspects are just special-case "kludges" improvised by evolution. A unifying paradigm for psychology would clarify this issue, distinguishing the universal structures of intelligence from the system-specific mechanisms used to achieve this structure. And general laws of psychology, if they were to be discovered, would be even more useful: they would help us to design artificial intelligences, by giving us tools with which to predict the behaviors of different classes of intelligent systems.

Complex Systems and Cognitive Science

In recent years, the emerging discipline of cognitive science has shown some signs that it might be appropriate for this crucial but slippery role of "unifying psychological paradigm." Despite the word "cognitive" in its name, cognitive science has found many applications beyond the study of cognition, not only in the study of perceptual and motor systems, but also in such realms as the psychology of religion (Arbib and Hesse, 1985) and the psychology of emotion (Mandler, 1985). The basic idea underlying all these applications is to use computational or "information-processing" models to join together abstract psychological theory with data from diverse sources: experimental psychology, neuroscience, artificial intelligence, or clinical psychology.

But, although cognitive science is a very powerful methodology, those who would interpret it as a unifying framework for psychology have one serious obstacle to overcome: the fact that cognitive science itself is not a coherent and unified field of study. Contemporary cognitive science is a loose, sometimes uneasy combination of ideas from different fields. At the very least, it decomposes into three fairly separate areas: cognitive psychology, artificial intelligence, and cognitive neuroscience. Like psychology as a whole, it lacks a unifying theoretical perspective, because it lacks a language and methodology for dealing with complex, multifaceted systems as wholes.

In particular, cognitive science suffers from an overemphasis on the brain/mind dichotomy. Some researchers strongly favor an abstract, computational view of the mind, and feel that, for the psychologist, knowledge of underlying neural mechanisms is almost irrelevant (Fodor, 1988). In this view, thoughts are the software, neurons are the hardware, and there is no close relation between the two, excepting the fact that the software runs on the hardware. In order to figure out how a computer program works, would you analyze the molecular structure of a silicon chip?

On the other hand, some researchers strongly favor a focus on neural mechanisms, arguing that the specific operations of the brain must be understood in terms of its specific physical properties (e.g. Edelman, 1988; Lynch, 1988). The fact that neural systems can be approximated by computational systems is considered to be largely irrelevant to the project of understanding mind. Thoughts and feelings, it is believed, can only be explained in terms of the details of synapses, neurotransmitters, and so forth. Mental processes are considered to be more like assembly language programs than like programs in a high-level programming language: they maintain a certain degree of abstraction, but they are fundamentally tied to the underlying hardware.

Not surprisingly, the brain-oriented view is prevalent among neurobiologists. The pro-computational view was at one point almost universal among computer scientists and cognitive psychologists, but the advent of neural network models has brought a significant percentage of these individuals over to the "brain" camp. The intriguing thing about complexity science is that it applies on both the neural level and the computational level. Indeed, it promotes a unified view of the mind/brain that makes the distinction between the two levels seem somewhat arbitrary. For instance, neural networks and rule-based production systems have typically been portrayed as two opposing approaches to artificial intelligence and cognitive modelling. But classifier systems combine aspects of both in a very natural way: they are systems of rules, the strengths of which evolve by equations similar to the equations of neural networks.

The important thing, in the complex systems view, is not whether one is looking at a network of neurons or a network of algorithms or rules; it is the emergent properties of the network. Given that the mind is so easily conceived as a collection of emergent properties of the brain, complex systems science, with its focus on emergence, would seem a particularly appropriate match for cognitive science, and for mind science as a whole. This is the motivating concept behind most of the research reported here.

Four Faces of Complexity in Psychology

In particular, there are four different levels on which complex systems science may assist in the enterprise of psychology. I call these the "four faces of complexity in psychology": modeling of underlying neural mechanisms, modeling of abstract psychological phenomena, analysis of complex experimental data, and clarification of philosophical foundations. Of course, there is a great deal of overlap between these categories, but I believe there are enough differences to merit making the distinction. On the crudest level, these four faces might be said to correspond to cognitive theory, cognitive neuroscience, experimental psychology, and philosophy of mind.

It is by working on these four levels at once that complex systems science may be able to transform the science of psychology. What psychology lacks is precisely an overarching perspective, bringing together brain, mind, and behavior. Complexity science applies to all of these areas, thus giving rise to a wonderfully interconnected web of detailed insights into the behavior of particular psychological systems.

In this Introduction, I will give a summary of some crucial ideas from complex systems science, and then briefly review each of these four areas. Finally, I will summarize the contributions to these areas made by the papers in this book.

2. SOME CONCEPTS OF COMPLEXITY SCIENCE

If one wishes to construct a synopsis of the diverse and rapidly changing field of complex systems science, there are at least two possible strategies one may take. On the one hand, one may focus on models. There are only a handful of complex systems models that have achieved any kind of popularity, and these models can be used to illustrate all the necessary ideas. Or, on the other hand, one may focus on abstract concepts, and treat the models as particular ways of realizing these concepts. My preference is for the latter approach.

What I mean by the standard complex systems models are neural networks, cellular automata, genetic algorithms, nonlinear differential equations, and nonlinear iterations. A more complete list would include a few other models, such as, for example, classifier systems, L-systems and Boolean automata networks. There is a great range of complex systems models in the literature -- Varela's (1978) form dynamics, Kampis's (1991) component-systems and my own magician systems (Goertzel, 1994) come to mind. But, so far, only a handful of computational models have really caught on.

A great deal has been learned from experimenting with these standard models, and the majority of the papers in this volume deal with one or another of them. However, I would argue that none of these models is really ideally suited for the modelling of psychological systems. The view to which I subscribe, which is perhaps an eccentric one, is that the use of these models is a sort of interim strategy for complex systems-theoretic psychology. Eventually, I believe, psychology will have to develop its own complex systems models, tailored to deal with complex networks of interdependent psychological entities.

One of the key insights of complexity science is that the choice of an underlying model is far from all-important. All the different complex systems models seem to display the same kinds of behaviors. Each one has its own peculiar features, its strengths and weaknesses. Yet strikingly similar emergent phenomena can be seen in all of them -- in neural networks, cellular automata, genetic algorithms, nonlinear dynamical systems, etc. It is these emergent phenomena, I would argue, that constitute the essence of complexity science.

As yet, we have no systematic scientific language for discussing emergent phenomena in complex systems. What we do have, however, is a family of overlapping languages, emerging simultaneously from different disciplines. From this family of overlapping languages a few key concepts have emerged: for instance, the concepts of attraction, chaos, adaptation, bifurcation and autopoiesis. It is on these key concepts that I will focus here, in this brief review of complex systems science.

Attractors

An attractor is, quite simply, a characteristic behavior of a dynamical system (i.e. of a system which changes over time). It is a striking and important fact that, for many mathematical and real-world dynamical systems, the precise details of the initial state are almost irrelevant. No matter where the system starts from, it will eventually drift into one of a small set of characteristic behaviors, a small number of attractors. Excellent references for the study of attractors and dynamical systems theory generally are (Devaney, 1988) and (Abraham et al, 1991).

Some systems have fixed point attractors, meaning that they drift into certain "equilibrium" conditions and stay there. Some systems have periodic attractors, meaning that, after an initial transient period, they lock into a cyclic pattern of oscillation between a certain number of fixed states. And finally, some systems have attractors that are neither fixed points nor limit cycles, and are hence called "strange attractors." The most complex systems possess all three kinds of attractors, so that different initial conditions lead not only to different behaviors, but to different types of behavior. Also, to complicate things further, strange attractors come supplied with "invariant measures," indicating the frequency with which different regions of the attractor are visited. Some dynamical systems have very simple attractors hosting subtly structured invariant measures.

Some dynamical systems are chaotic, meaning that, despite being at bottom deterministic, they are capable of passing many statistical tests for randomness. They appear random. Under some definitions of "strange attractor," dynamics on a strange attractor are necessarily chaotic; under my very general understanding of the term, however, this need not be the case. There are several different technical definitions of chaos; the hallmark of chaos, however, is what the physicists call sensitive dependence on initial conditions. This means that system states which are very similar to each other can lead to system states which are very different from each other. Combined with a bounded set of system states, this property will tend to lead to chaotic dynamics.

Each attractor has a "basin," which consists of the collection of system states that will eventually lead to that attractor. The size of a basin determines, roughly speaking, how often the corresponding attractor will be seen. A system poised right on the boundary between the basins of two attractors is at a point of bifurcation: a tiny change in the system's state can then push the system in one direction or the other.

To illustrate these ideas, let us consider a very simple example. The simplest type of nonlinear iteration is the quadratic. In one dimension a quadratic iteration takes the familiar form

x --> ax^2 + bx + c

The standard example is the logistic,

x --> ax(1-x)

The dynamics of this iteration depends on the parameter a. For instance, taking a=2, one finds a system which rapidly converges to the fixed point x=1/2. Taking a between 3 and roughly 3.57, one finds a system which settles into cyclic behavior. On the other hand, taking a=4, the iteration produces a trajectory that is chaotic -- totally unpredictable, just like a coin toss. Of course, the iteration is not creating randomness; it is taking the randomness inherent in the digits of almost any initial value x, and steadfastly refusing to impose any order.
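
To make these regimes concrete, here is a minimal Python sketch of the logistic iteration. The parameter values follow the discussion above; the initial value and iteration counts are arbitrary illustrative choices.

    def logistic_orbit(a, x0=0.3, n_transient=500, n_keep=8):
        """Iterate x --> a*x*(1-x); discard a transient, return the tail."""
        x = x0
        for _ in range(n_transient):
            x = a * x * (1.0 - x)
        tail = []
        for _ in range(n_keep):
            x = a * x * (1.0 - x)
            tail.append(round(x, 4))
        return tail

    for a in (2.0, 3.2, 4.0):
        print(a, logistic_orbit(a))
    # a = 2.0: the tail sits at the fixed point x = 1/2
    # a = 3.2: the tail alternates between two values (a period-2 cycle)
    # a = 4.0: the tail wanders erratically (chaos)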

In two dimensions the quadratic is a little less familiar, but the basic idea is the same. The equation is uglier, but the pictures, being in two dimensions rather than one, are much prettier! The formula is

x --> ax^2 + bx + c + dxy + ey + fy^2

y --> gx^2 + hx + i + jxy + ky + ly^2

Instead of three coefficients, one now has twelve, labeled a through l. A small but nonzero percentage of coefficient choices will lead to chaotic attractors. And each time one varies the coefficients, the shape of the attractor obtained from repeated iterations varies. A small change in coefficient can cause a large change in picture.
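
This kind of exploration is easy to automate. The following sketch draws coefficient sets at random and keeps those whose orbits remain bounded without collapsing to a point; the coefficient range and the various thresholds are arbitrary illustrative choices.

    import random

    def iterate_quadratic(coeffs, n=2000):
        """Iterate the two-dimensional quadratic map from a fixed seed point."""
        a, b, c, d, e, f, g, h, i, j, k, l = coeffs
        x, y = 0.05, 0.05
        orbit = []
        for step in range(n):
            x, y = (a*x*x + b*x + c + d*x*y + e*y + f*y*y,
                    g*x*x + h*x + i + j*x*y + k*y + l*y*y)
            if abs(x) > 1e6 or abs(y) > 1e6:
                return None              # orbit escapes to infinity
            if step > 100:
                orbit.append((x, y))     # keep post-transient points
        return orbit

    random.seed(1)
    found = 0
    for trial in range(1000):
        coeffs = [random.uniform(-1.2, 1.2) for _ in range(12)]
        orbit = iterate_quadratic(coeffs)
        if orbit:
            xs = [p[0] for p in orbit[-100:]]
            if max(xs) - min(xs) > 1e-3:  # crude test: not a fixed point
                found += 1
    print(found, "of 1000 random coefficient sets gave bounded, nontrivial orbits")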

Each of the pictures in Figure 1 has two numbers beside it. One is the fractal dimension -- a quantity which measures the "thinness" of the attractor. A line has fractal dimension 1, and an ordinary region of the plane, like a filled-in square, has fractal dimension 2; these attractors lie somewhere in between. The other number is the Liapunov exponent -- a number which quantifies the amount of chaos in a system, by measuring the rate at which nearby trajectories diverge from each other. A chaotic system has a positive Liapunov exponent, and, roughly speaking, the larger the exponent, the more chaotic the system is.
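
The Liapunov exponent is straightforward to estimate numerically. For a one-dimensional map it is just the long-run average of ln|f'(x)| along a trajectory (for two-dimensional maps like the one above, the same average is taken over the expansion rates of the Jacobian). A minimal sketch for the logistic map, with arbitrary iteration counts:

    import math

    def liapunov_logistic(a, x0=0.3, n=100000, n_transient=1000):
        """Average ln|f'(x)| = ln|a*(1 - 2x)| along a logistic-map trajectory."""
        x = x0
        for _ in range(n_transient):
            x = a * x * (1.0 - x)
        total = 0.0
        for _ in range(n):
            total += math.log(abs(a * (1.0 - 2.0 * x)))
            x = a * x * (1.0 - x)
        return total / n

    print(liapunov_logistic(3.2))  # negative: a periodic attractor
    print(liapunov_logistic(4.0))  # approaches ln 2 = 0.693...: chaos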

Adaptation

Another key idea of complex systems science is evolutionary adaptation. As originally formulated by Darwin and Wallace, the theory of evolution by natural selection applied only to species. But since that time, natural selection has been seen to play a crucial role in a number of other contexts as well. Perhaps the most dramatic example is Burnet's theory of clonal selection, the foundation of modern immunology, which states that immune systems continually self-regulate by a process of natural selection (Burnet, 1959). More speculatively, Nobel Prize-winning immunologist Gerald Edelman (1988) has proposed a similar explanation of brain dynamics, his theory of "neuronal group selection" or Neural Darwinism, mentioned above.

In computer science and cognitive modelling, we have the genetic algorithm and classifier systems, which do optimization and learning by simulated natural selection. The very origin of life is thought to have been a process of molecular evolution. Kenneth Boulding, among many others, has used evolution to explain economic dynamics. Finally, extending the evolution principle to the realm of culture, Richard Dawkins has defined a "meme" as an idea which replicates itself effectively and thus survives over time. Using this language, we may say that natural selection itself has been a very powerful meme.

The essence of evolution is progressive change induced by the need to adapt to surroundings. Beyond this very general statement, it is difficult to give a general formulation of the evolutionary process. For instance, a common view is that an evolutionary system necessarily consists of "replicators" which thrive to different degrees, and whose replicative process leaves room for some sort of variation. But there is at least one evolutionary model, Edelman's neuronal group selection, which does not fit this picture: it has no replication at all, but relies on an hypothesis of redundancy, according to which any form that is needed is already there in the brain.

In the perspective of complexity science, the crux of evolution does not consist in any particular collection of mechanisms, but rather in the emergence of webs of mutually inter-adapting entities. Thus, the term "survival of the fittest" is entirely apropos, but only if one interprets "fittest" to mean "fits in best with its surroundings." For a systematic elaboration of this view of evolution, see (Goertzel, 1993).

Autopoiesis

Finally, let us turn to the concept of autopoiesis. This is the least familiar of my "key concepts of complex systems science," but I believe it is no less essential than the others. The coinage of biologist Humberto Maturana (see Varela, 1978), it refers to the ability of complex systems to produce themselves. The paradigm case of autopoiesis is the biological organism, which consists of a collection of interconnected parts precisely designed so as to be able to support and produce each other. Another example, just as apt, is the modern economy. No individual is self-sufficient, and only a small percentage are truly parasitic; directly or indirectly most individuals rely for their lifestyle on the actions of most other individuals.

Yet another example, discussed in (Goertzel, 1994), is the belief system. No one who has argued with a "true believer" in some closed belief system will doubt the autopoiesis of belief systems. Every point in the true believer's argument is backed up by other points in their argument. Of course, in essence all the points are only backed up by one another, but this doesn't hurt the true believer's argument any: there is no way to attack all fifty thousand points at once, so they will never lose the argument. This is an example of the danger of autopoiesis. Autopoiesis in belief systems can serve a positive role; it can serve to maintain a useful belief system during times which do not happen to offer it much external support. But when autopoiesis is a belief system's main means of support, what one has is a parasitic belief system.

Autopoiesis relates to adaptation in an obvious way. Evolution serves to adapt systems, and thus to "improve" them in various ways, but in most real-world situations it only acts in the context of autopoiesis. Evolutionary adaptations which destroy autopoiesis will not lead to viable structures. On the other hand, evolutionary adaptations which lead to stronger, more stable autopoiesis will tend to lead to structures that survive. This is clearly what is seen in the evolution of organisms.

And autopoiesis also relates to attraction, in the sense that an autopoietic system is, itself, often an attractor. The process of evolving an autopoietic system is a process of autopoietic attraction. Starting from a certain initial population of systems, evolution makes variations by its operations of mutation and sexual reproduction. Some of these variations are successful, some are not. Gradually, the successful ones begin to lead along a path -- a path toward an attractor of the evolutionary dynamic, which represents a new autopoietic system.

Mind as a Complex System

The general view of psychological systems toward which complexity science points is, it would seem, quite clear. It is rarely articulated in an explicit way, but would rather seem to exist as a sort of gestalt in the community of researchers working on complex systems models in psychology. In (Goertzel, 1994) I have sought to articulate this view of mind in a formal way, as well as to elaborate it in various directions; a summary and extension of this work is given in Chapter 2 of this volume.

First of all, one wants to say that psychological entities -- thoughts, feelings, desires, memories -- are attractors of dynamical systems. At bottom they are attractors of neural dynamical systems, but they may also be understood as attractors of more abstract dynamical systems, systems of computational processes which represent the "middle-level" structure of the brain.

One wants to say that these attractors do not remain constant -- they evolve over time. They adapt to their environment, which consists of other attractors in the mind and also, in the case of perceptual/motor attractors, of the external world. In order to adapt they must have some kind of ability for pseudorandom variation.

Finally, these attractors are combined in a complex network. They keep each other alive -- i.e., they are an autopoietic system. This overarching network of attractors may have a number of different large-scale architectural principles. Prominent among these are such things as an hierarchical structure, a fractal structure of attractors within attractors within attractors, and an associative structure in which attractors tend to interact with related attractors.

This is a very general view of the mind, which may be articulated in ordinary language, without any technical vocabulary. In its holism and its emphasis on interdependence and change, it has many commonalities with ancient Oriental theories of mind. However, complex systems science gives us, for the first time, a scientifically rigorous way of discussing and exploring this kind of approach to the mind. This is the wonder of complexity science. Unlike, say, quantum physics or relativity, complex systems theory does not open dazzling new worlds up to scientific investigation. Rather, it enables science to deal with the more common-sensical aspects of the universe. It acknowledges the complex, chaotic, self-organizing world that the human race has known, intuitively, all along.

3. FACES OF COMPLEXITY IN PSYCHOLOGY

If the general philosophy of mind suggested by complex systems science is clear, the steps required to turn this philosophy into a concrete scientific theory are not. If one could rigorously demonstrate the mind to be a self-organizing, autopoietic network of adaptive attractors, then one would have achieved the long-term goal mentioned in Section 1: one would have constructed a solid unifying paradigm for psychology. But there is no obvious way to demonstrate such abstract statements using the current tools of experimental psychology.

What needs to be done as a first step is, it would seem, to demonstrate the relevance of the core concepts of complexity science to the study of psychological systems. This may be done using any of the standard complex systems models, or using new models specifically tailored for psychology. The point is to demonstrate that the language of attractors, chaos, adaptation, autopoiesis, and so forth, is indeed appropriate for the description and analysis of the mind/brain. Then, once we have observed enough specific examples of the manifestation of these phenomena in psychological systems, the next step on the path should become clear.

The Point of View from Cognitive Neuroscience

The unified brain/mind picture suggested by complexity science is supported by recent insights in cognitive neuroscience. Recent investigations using PET and fMRI brain scanning have given us unprecedented insight into the structure and dynamics of the brain, and its psychological implications. Based on their extensive work with brain scanning technology, and on a survey of the cognitive science literature, Michael Posner and his colleagues (Posner and Raichle, 1994) have given a very specific and useful list of principles governing brain function. A brief discussion of these principles is an excellent way to begin our discussion of the applications of complexity science to the mind/brain:

1. Elementary mental operations are located in discrete neural areas.

2. Cognitive tasks are performed by a network of widely distributed neural systems.

3. Computations in a network interact by means of "re-entrant" processes.

4. Hierarchical control is a property of network operation.

5. Activation of a computation produces a temporary reduction in the threshold for its reactivation.

6. When a computation is repeated, its reduced threshold is accompanied by reduced effort and less attention.

7. Activating a computation from sensory input (bottom-up) and from attention (top-down) involves many of the same neurons.

8. Practice in the performance of any computation will decrease the neural networks necessary to perform it.

Principle 1 is seen very clearly in PET scans, as well as in classical brain lesion studies. Doing different activities, such as hearing words versus seeing words, activates different areas of the brain. However, this "localization" does not mean that each activity is carried out in a single isolated region of the brain. This points toward one of the deepest puzzles of cortical structure: the local/global paradox. On the one hand, we know that on the surface of the cerebral cortex, numerous specific areas can be associated with specific functions. But on the other hand, it has been repeatedly demonstrated that the behavior of an animal is degraded equally no matter what part of the cortex is removed -- the degree of degradation depends on the amount of cortex removed, but not on the specific areas removed. As a memory system, the cortex is somehow halfway between a filing cabinet and a hologram -- it is structured in such a way as to be localized and holistic at the same time!

Next, in Principles 2-4, it is pointed out that there is a structure to this network of re-entrant, distributed neural operations. In some cases there is an hierarchical control structure, in the sense that activation of one area by a "controller" area inhibits activation in other, competing areas. Principle 7 states that, as PET studies demonstrate in detail, one manifestation of this hierarchical structure is a shared perceptual-motor hierarchy.

The remaining principles relate to pattern-recognition in the general sense. They state, in essence, that what the brain does is to recognize patterns. Principle 5 is an abstract version of classical conditioning; it is the basis for neural habituation. It states that components of the brain are more receptive to stimuli similar to those they have received in the recent past. Principles 6 and 8 are corollaries of this, which are observed in PET scans by reduced blood flow and reduced activation of attention systems in the presence of habituated stimuli. Principle 8 in particular provides a satisfying connection between neuroscience and algorithmic information theory (Goertzel, 1993; Chaitin, 1986). For what it says is that, once the brain has recognized something as a repeated pattern, it will use less energy to do that thing. Thus, where the brain is concerned, energy becomes approximately proportional to subjective complexity. Roughly speaking, one may gauge the neural complexity of a behavior by the amount of energy that the brain requires to do it.
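
A toy simulation may help fix ideas. The sketch below is not Posner and Raichle's formalism -- it is simply an illustrative unit whose activation threshold drops each time it fires (Principle 5), so that repetition makes activation progressively "cheaper" (Principles 6 and 8); all the numerical values are arbitrary.

    def habituation_run(stimuli, threshold=1.0, drop=0.15, recovery=0.02, floor=0.3):
        """Toy habituation: each firing lowers the unit's threshold toward a floor."""
        for t, s in enumerate(stimuli):
            fired = s >= threshold
            print(f"t={t}  stimulus={s:.2f}  threshold={threshold:.2f}  fired={fired}")
            if fired:
                threshold = max(floor, threshold - drop)    # easier to reactivate
            else:
                threshold = min(1.0, threshold + recovery)  # slow recovery

    habituation_run([1.0] * 6 + [0.5] * 3)
    # The repeated stimulus fires ever more cheaply; after habituation even
    # the weaker stimulus (0.5) exceeds the lowered threshold.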

Posner's principles span the neural level and the abstract, cognitive level. They portray the brain as a structured network of interacting, distributed neural systems, modifying each other, competing and cooperating with each other, and recognizing patterns in each other. By extension, they portray the mind as a structured network of interacting processes, involved in mutual dynamics of modification, competition, cooperation and pattern recognition. What complex systems science has to add to this picture is, most of all, a more precise understanding of the dynamics of these neural and mental process systems. Attraction, adaptation, autopoiesis, bifurcation and related ideas can help us to understand what these process systems are actually doing.

Neural Network Models

Neural network models are without doubt the best developed interface between complexity science and psychology. These models run the gamut from biologically accurate models of single neurons to blatantly unrealistic combinations of "formal neurons" intended for specific purposes such as vision processing. They have become a standard part of computer science, and for close to a decade they have assumed a dominant role in the discipline of cognitive science.

What is the status of neural network models today? In computer science and engineering, they grow more powerful and more popular every year. In psychology, however, things are more complicated. One might say that the "honeymoon" phase for connectionist models in psychology is over, and a more interesting, potentially more productive period has begun.

It has been realized that neural network models can be configured and trained to simulate essentially any precisely specifiable behavior, so that, in order to be doing psychology rather than computer science, one must ask more of a connectionist model than mere ability to solve some problem. One wants the model to, for instance, display a similar pattern of errors to human beings, or a similar developmental pattern. And the number of cases in which such similarities can in fact be demonstrated is striking. For instance, in the area of language, neural networks trained to produce speech and then lesioned display errors similar to those of dyslexics. Aphasia and epilepsy can also be simulated in similar ways. In the area of vision, neural networks can not only carry out such processes as edge detection and object recognition; they can also explain the "form constants" underlying the hallucinations seen on LSD. Furthermore, neural network models enable us to explore commonalities between different psychological tasks -- e.g., the same neural mechanisms used for vision processing can also be used to explain aspects of language acquisition (Goertzel and Goertzel, 1995).

The moral would seem to be that, while neural network models are only very rough models of the structure and dynamics of the brain, they nevertheless fall into a category of "brain-like" systems, systems displaying various properties similar to those of the brain. Of course, the category of brain-like systems is a fuzzy one, and much current work focuses on making neural networks more accurate models of neural systems, not only on the single neuron level, but on the level of overall network architecture and dynamics.

Not all work with neural networks is oriented toward complexity and emergence. For instance, it is possible to use the backpropagation method for training feedforward neural networks as nothing more than a nonlinear curve-fitting algorithm. In this case, neural networks are a more easily accessible alternative to techniques like polynomial regression, but they do not reveal anything about the subtle interdependences of the mind/brain. On the other hand, much recent research focuses on neural networks which display particularly interesting emergent behavior. For example, neural networks with chaotic dynamics have received a great deal of attention lately: Freeman (1993) has demonstrated their utility as models of the olfactory cortex, and Alexander (1996) has explored their general utility as learning systems. What is clear is that neural networks remain a vital tool for modeling of psychological systems, on many different levels.
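
The curve-fitting reading of backpropagation is easily illustrated. The sketch below trains a one-hidden-layer network to regress y = sin(x); the network size, learning rate and iteration count are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(X)

    W1 = rng.normal(0, 0.5, (1, 20)); b1 = np.zeros(20)
    W2 = rng.normal(0, 0.5, (20, 1)); b2 = np.zeros(1)
    lr = 0.01

    for epoch in range(5000):
        h = np.tanh(X @ W1 + b1)          # hidden layer
        pred = h @ W2 + b2                # linear output
        err = pred - y
        # gradients of mean squared error, propagated backwards
        d_out = 2 * err / len(X)
        dW2 = h.T @ d_out; db2 = d_out.sum(axis=0)
        dh = (d_out @ W2.T) * (1 - h**2)  # tanh derivative
        dW1 = X.T @ dh; db1 = dh.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("final MSE:", float((err**2).mean()))  # small: the curve is fitted

Used this way, the network is simply a flexible family of nonlinear functions plus a gradient-descent fitting procedure; nothing in the exercise depends on its neural interpretation.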

The papers in this volume deal with the brain on several different levels. First, George Christos, a mathematician, attempts to use one of the standard neural network models from the engineering and physics community -- the Hopfield model -- to address serious psychological and physiological issues. He takes the Crick-Mitchison hypothesis, that dreaming corresponds to reverse learning in neural networks, and develops it with a thoroughness and rigor that exceeds that found in Crick and Mitchison's original development.
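
For readers unfamiliar with the model, the following sketch shows a Hopfield network with Hebbian storage and a "reverse learning" step of the general kind Crick and Mitchison proposed: relax from a random state to whatever (possibly spurious) attractor is reached, then weaken that attractor. The network size, pattern count and unlearning rate here are illustrative choices, not taken from Christos's paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 64, 5
    patterns = rng.choice([-1, 1], size=(P, N))

    # Hebbian storage: W accumulates outer products of the stored patterns
    W = sum(np.outer(p, p) for p in patterns).astype(float) / N
    np.fill_diagonal(W, 0.0)

    def recall(W, state, steps=20):
        """Recall by repeatedly thresholding the local fields."""
        s = state.astype(float).copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    def reverse_learn(W, epsilon=0.01):
        """'Dream': relax from a random state, then weaken that attractor."""
        s = recall(W, rng.choice([-1, 1], size=N))
        W2 = W - epsilon * np.outer(s, s) / N   # anti-Hebbian decrement
        np.fill_diagonal(W2, 0.0)
        return W2

    # Apply a few unlearning steps; stored patterns should remain recallable
    for _ in range(20):
        W = reverse_learn(W)
    cue = patterns[0].copy(); cue[:8] *= -1     # corrupt 8 of 64 bits
    print("overlap after recall:", recall(W, cue) @ patterns[0] / N)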

Looking one level higher, Garry Flint, in his paper, takes up Walter Freeman's nonlinear dynamics model of the olfactory cortex, and extends it to the whole brain in an attempt to account for the dynamics of therapeutic change. Freeman's model is based on EEG recordings of rabbits' olfactory systems, and also on computer simulations of neural networks using the Hodgkin-Huxley equations. Flint's extrapolation of Freeman's ideas is metaphorical yet precise, and is backed up by clinical examples.

Finally, Paul Watters presents some intriguing analyses of EEG data, elicited from subjects under different sorts of conditions -- e.g. stimulated by caffeine, or stimulated by thinking they have had caffeine. It is shown that different states of mind can be differentiated in terms of their effects on the dynamics of EEG. As to the relation between these EEG dynamics, the underlying neural dynamics studied by Christos, and the global brain dynamics posited by Flint, it is difficult to make generalizations, but clearly work on all these levels is important if we ever wish to achieve a truly unified dynamical picture of the brain.

Abstract Psychological Models

If the category of brain-like systems is large enough to contain all manner of formal neural network models, is it perhaps not even broader -- say, broad enough to contain some models that have no obvious relation to the brain whatsoever? The answer to this question appears to be yes. Although they may have no formal neurons and synapses, complex computational or mathematical systems may still display many of the same overlying structural and dynamical patterns that the brain does. This is the motivation behind the more abstract varieties of cognitive modelling, as distinct from neural modelling.

For instance, Fred Abraham (1991) and his colleagues have used a variety of nonlinear differential equations models from mathematical dynamical systems theory to study psychological systems. The parameters of these systems correspond to parameters of real psychological systems (often chemical parameters), and the dynamical behaviors of these systems can be quite psychologically realistic.

Robert Gregson (1995), on the other hand, has used nonlinear recursions in the complex plane to simulate psychophysical phenomena. He considers a complex cubic iteration called Gamma, of the form

Y_{n+1} = -a (Y_n^2 + e^2) (Y_n - 1)

This iteration is used as a psychophysical transfer function, a model of a single psychophysical channel, where the initial value Y_0 and the parameter e are fixed, the stimulus is U = ma + b, and the response is Re[Y_n] for some fixed n. The imaginary part of Y is used to couple different channels together; sometimes the parameter e in one channel is also coupled with the parameter a in another. The results of this nonlinear emulation of psychophysics are qualitatively similar to real psychophysical data. They explain why power laws seem to hold in certain cases, and they also explain such phenomena as crossmodal interference and visual illusions.
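
As a concrete illustration, the sketch below iterates Gamma for a fixed Y_0, e and n, sweeping the gain parameter a and reading off Re[Y_n] as the response. The particular numerical values are illustrative and are not drawn from Gregson's papers.

    def gamma_response(a, e=0.3, y0=0.5 + 0.1j, n=8):
        """One channel of the Gamma recursion: return Re[Y_n] as the response."""
        y = y0
        for _ in range(n):
            y = -a * (y * y + e * e) * (y - 1.0)
        return y.real

    for a in (1.0, 2.0, 3.0, 3.5, 4.0):
        print(f"a = {a:.1f}  response Re[Y_n] = {gamma_response(a):+.4f}")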

In this volume we have two papers developing Gregson's model: a review by Takuo Henmi, summarizing some of Gregson's earlier work and discussing various extensions to it; and Gregson's own paper, giving a remarkably convincing analysis of line bisection under lateral visuo-spatial neglect. As should be clear from these two papers, psychophysics is perhaps the most promising area for the detailed development of complex systems models at present. One can expect to see work in this area develop quickly, and perhaps provide inspiration to those working in other areas, such as cognition, as well.

Neither Abraham's differential equations nor Gregson's Gamma recursion is intended to represent any particular neural system. The idea is, rather, that some of the masses of neurons involved in mental processing may collectively display dynamics that match the dynamics of simple mathematical equations. Just as the visual cortex contains neural subsystems that compute Fourier transforms (Pribram, 1991), other parts of the brain seem to emulate different mathematical transformations.

A different type of abstract psychological complex systems model is provided by the genetic algorithm and classifier systems (Goldberg, 1988; Holland et al, 1986). These are evolutionary models: they model the processes of creativity and learning as processes of progressive natural selection. The connection between evolution and learning is an old one, but until the advent of these models, it was not possible to examine the relationship except in an extremely oversimplified way.

A genetic algorithm is a population of abstract entities, which reproduce sexually and asexually and are selected according to some abstract notion of "fitness." A population can thus be evolved to satisfy any specified criterion. John Koza has used this approach to evolve computer programs serving a variety of different functions. Looking back toward the brain, GA's are closely related to Edelman's model of neuronal group selection, the key difference being that Edelman's model admits only asexual reproduction. Classifier systems, on the other hand, incorporate GA's into a rule-based psychological model: they are systems of rules which send each other messages and reward each other, and periodically reproduce as in the genetic algorithm. They are an elegant synthesis of complex systems ideas with the rule-based cognitive models that were prevalent in the 1970's and early 1980's.
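
A genetic algorithm is simple enough to sketch in a few lines. Here the "fitness" criterion is an arbitrary stand-in (the number of 1-bits in a bit-string); selection is fitness-proportional, and reproduction combines crossover with pointwise mutation. All parameters are illustrative.

    import random

    random.seed(0)
    L, POP, GENS, MUT = 32, 40, 60, 0.02

    def fitness(bits):
        return sum(bits)                  # stand-in "fitness" criterion

    def crossover(p1, p2):
        cut = random.randrange(1, L)      # "sexual" reproduction
        return p1[:cut] + p2[cut:]

    def mutate(bits):
        return [b ^ 1 if random.random() < MUT else b for b in bits]

    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
    for gen in range(GENS):
        weights = [fitness(ind) + 1e-9 for ind in pop]   # proportional selection
        pop = [mutate(crossover(*random.choices(pop, weights=weights, k=2)))
               for _ in range(POP)]
    print("best fitness after evolution:", max(fitness(ind) for ind in pop))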

Mark Randell, in his paper for this volume, compares the behavior of classifier systems and human subjects on the task of controlling complex dynamical systems (cellular automata). His work is unusual in the area of decision theory in that it considers complex decision making, rather than decision making in simple, artificial environments. There is complexity of the environment, and there is complexity of the learning system within the environment; and what is really interesting in the mind lies at the interface of these two complexities.

There are also other types of models; for instance, there is a line of thought, followed by systems-theory oriented researchers such as Francisco Varela (1978), George Kampis (1991) and the author (Goertzel, 1994), which focuses on the ability of psychological processes to organize and produce one another. The self-production of mental systems is a century-old philosophical idea, but without the conceptual tools of complex systems science, there was no way to explore it in a detailed and rigorous way. Now we may understand self-producing thought systems as attractors, and may proceed to inquire about their structure and basin size in a systematic way. In my article in this section, "A Metric for Conceptual Similarity," I use this system-theoretic view of the mind to approach one of the central questions of contemporary psychology, namely: what is it that makes two mental entities similar? A novel metric for conceptual similarity is proposed, specific examples are given, and connections are drawn to the theory of analogical reasoning.

Coming to Grips with Complex Data

In data analysis, the goal is not to construct accurate models of the processes underlying behaviors, but rather to arrive at comprehensible and abstract characterizations of which behaviors are present. The purpose of experimentation is, in most cases, not to find numbers but to find patterns. All too often, however, standard methods of data analysis fail to elicit the subtle patterns underlying complex data.

Without adequate methods of data analysis, one has no way to test one's psychological models. For, in most cases, it is too much to ask that a model should produce an exact numerical fit to human data. Instead, one wants to ask that the model produces data which are qualitatively or structurally similar to human data. But in order to do this, one must have ways of getting at the qualitative, structural patterns underlying data. Classical statistical methods are often inadequate for this task; complexity science provides alternatives that are in many cases more satisfactory.

The importance of new methods for data analysis to experimental psychology can hardly be overestimated. As compared with natural sciences, the quantity and quality of data obtained from psychological experiments is generally very poor, and thus a great amount of ingenuity is often needed in order to tease meaningful patterns from the results of experiments. There is a tendency to replicate old experiments whenever a new method of data analysis becomes available -- the raw data from the original experiments being too difficult to find, or too hard to work with.

The standard method of dealing with psychological data is linear statistics. Of course, nonlinear statistical methods have a venerable history in psychology, but their use has never extended beyond a small community of sophisticates, and for relatively good reasons: in practice, the extra effort involved in using them is only rarely compensated by correspondingly greater insight. Nonlinear dependencies are generally dealt with by single- or multi-dimensional scaling. Since the data involved is generally noisy, it is easy to write off all deviations from one's simplified model as "noise."

The conventional methods of psychological data analysis are not to be scoffed at: it is no easy thing to make sense of small, noisy data sets. Natural scientists, blessed with much better data, have far fewer obstacles in the way of understanding their phenomena of interest. What has happened, however, in one case after another, is that inadequate methods of data analysis have caused the complex interdependences that make psychological systems creative, productive and interesting to be written off as "noise," or else simply swept under the rug.

A classic case is psychophysics, as highlighted by Gregson's work, mentioned above. The standard scaling and curve-fitting paradigm has led to simplified "psychophysical laws" like Stevens' power law, which are fairly accurate in a certain range, but inaccurate outside this range. The non-power-law behavior outside the "good" range then becomes a special phenomenon, to be briefly mentioned and then ignored. But in fact, as Gregson's nonlinear psychophysics reveals, this non-power-law behavior is a consequence of the same complex dynamics that gives rise to the appearance of power-law behavior in certain circumstances (Gregson, 1988). Some of the "noise" that mars the fit of the power law to actual data may also be explained by recourse to an underlying level of nonlinear dynamics. And such phenomena as cross-modal interference, which are inexplicable by simple curve-fitting models, become not only explicable but inevitable once one moves to this deeper level.

Complex systems science shifts the focus of data analysis, away from the search for simple correspondences between numerical variables. It encourages us to search for, on the one hand, hidden, underlying nonlinear mechanisms and, on the other hand, high-level emergent patterns. It acknowledges that, as chaos theory has taught us, apparent noise may ensue from underlying deterministic simplicity; and that underlying noisy complexity may give rise to simple emergent patterns. In moving toward complexity-oriented methods of data analysis we leave behind some of the security of linear statistics, in which everything is "guaranteed" by easily-computed confidence intervals. However, we gain something that is, in the long run, vastly more important: fundamental insight.

Naohito Chino's paper spans the areas of complex systems modeling and data analysis. It deals with similarity judgements: how they should be represented and measured. Chino's dynamical scaling method represents an intriguing new approach to the complexity of psychological data: one uses a dynamical system to help one understand the data elicited from studying a dynamical system.

Next, Scheier and Tschacher's paper deals with the question: What do you do when the time series data is so complex that ordinary numerical and statistical methods fail to yield information? They propose a radical and intriguing answer: Look at the rise or fall of order in the dynamical system one is studying. Using an entropy-based measure of order, they apply this method to the study of therapeutic change, with striking results.
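
Their specific measure is not reproduced here, but the general idea can be illustrated with a generic sketch: coarse-grain a time series into a few symbols, and track the Shannon entropy of a sliding window. Falling entropy signals the emergence of order. The bin and window sizes are arbitrary choices.

    import math
    import random

    def shannon_entropy(symbols):
        """Shannon entropy (in bits) of a list of discrete symbols."""
        counts = {}
        for s in symbols:
            counts[s] = counts.get(s, 0) + 1
        n = len(symbols)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def windowed_entropy(series, n_bins=4, window=50):
        """Bin a real-valued series into n_bins symbols; entropy per window."""
        lo, hi = min(series), max(series)
        width = (hi - lo) / n_bins or 1.0
        syms = [min(int((v - lo) / width), n_bins - 1) for v in series]
        return [shannon_entropy(syms[i:i + window])
                for i in range(len(syms) - window)]

    # A series that begins as noise and settles into a period-2 oscillation:
    random.seed(0)
    series = [random.random() for _ in range(200)] + \
             [0.5 + 0.4 * (-1) ** t for t in range(200)]
    ent = windowed_entropy(series)
    print(f"early entropy {ent[0]:.2f} bits --> late entropy {ent[-1]:.2f} bits")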

Finally, Gwen Goertzel's and my paper in this section presents a new technique for the analysis of complex time series data, called the Chaos Language Algorithm. This algorithm is based on symbolic dynamics, which is, in essence, a way of studying the structure of strange attractors using formal language theory. The basic idea of symbolic dynamics is to divide the state space of a dynamical system into a finite number of regions or "cells," numbered 1 through N. One then charts each trajectory of the system as an itinerary of cells. If a trajectory goes from cell 14 to cell 3 to cell 21 to cell 9, and so forth, its symbol sequence will begin {14, 3, 21, 9, ...}. The collection of code sequences obtained in this way can tell one a great deal about the nature of the dynamics. These sequences may be viewed as the "corpus" of some unknown language, and subjected to mathematical, statistical and linguistic analysis. This is a newly emerging method which will, I hope, bear fruit in practical data analysis over the next few decades.
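
The symbolization step itself is easy to demonstrate (the grammar-inference stage of the Chaos Language Algorithm is not reproduced here). The sketch below partitions the unit interval into four cells, records an itinerary of the chaotic logistic map, and tabulates cell-to-cell transitions; note that some transitions never occur at all -- the attractor's "grammar" forbids them.

    from collections import Counter

    N_CELLS = 4

    def itinerary(a=4.0, x0=0.3, n=5000):
        """Record the cell number visited at each step of the logistic map."""
        x, cells = x0, []
        for _ in range(n):
            x = a * x * (1.0 - x)
            cells.append(min(int(x * N_CELLS), N_CELLS - 1))
        return cells

    cells = itinerary()
    transitions = Counter(zip(cells, cells[1:]))
    print("most common cell-to-cell transitions:")
    for pair, count in transitions.most_common(5):
        print(pair, count)
    # E.g. the transition (0, 3) cannot occur: points in cell 0 map below 0.75.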

Complexity, Consciousness and Philosophy of Mind

Finally, the last section of the book deals with the broader implications of complexity science -- the implications of complexity for the "softer" aspects of psychology, and for the philosophy of mind. Complexity science is a highly technical endeavor, and yet its implications are as much conceptual and philosophical as they are mathematical and scientific. Complexity science urges us to think about the mind in a whole new way -- it focuses us on complexity and emergence, rather than on individual variables.

Allan Combs, in his brief paper, reviews the possible implications of complexity for the study of consciousness. Complexity, he believes, ties in with age-old views of consciousness as a process rather than an entity. Despite the somewhat different vocabulary, these ideas resonate rather well with George Christos's article on attractor neural networks.

Charles Card paints a broad picture of the role of archetypes in modern day science, with a focus on the science of complexity. The concept of "archetype" arose in Jungian psychology and for half a century has been largely dismissed by research psychologists. However, it seems possible that, with the science of complexity, the concept may enjoy a resurgence, in some modified form. The patterns emerging among various complex systems, whatever one calls them, have many similarities to what Jung called "archetypes." The mind moves in strange and wondrous ways!

In my own article in this section, I bring the consciousness and archetypes threads together, and propose that there are certain "archetypal" structures of consciousness, which are given by the quaternionic and octonionic division algebras. This idea is seen to join together the system-theoretic models of mind presented in my previous work with ideas from Oriental psychology, and with the contemporary psychology of short-term memory. Finally, Onar Aam's brief article develops similar ideas regarding the division algebras, but from a more philosophical vantage point.

REFERENCES