

A System-Theoretic Analysis of Focused Cognition,

and its Implications for the Emergence of Self and Attention

 

Ben Goertzel

Novamente LLC

November 4, 2006


Abstract: A unifying framework for the description and analysis of focused cognitive processes is presented, based on the system-theoretic notions of forward and backward synthesis.  Forward synthesis iteratively creates combinations, seeded from an initial focus-set of mental items; backward synthesis takes a set of mental items and tries to create them iteratively via forward-synthesis.  The utility of a dynamic involving alternating forward and backward synthesis is discussed.  The phenomenal self and the shifting focus of attention, two critical aspects of cognitive systems, are hypothesized to emerge as strange attractors of this alternating dynamic.  In a companion paper, this framework is used to provide a systematic typology for the various cognitive agents in the Novamente AI system, including probabilistic inference, evolutionary learning, attention allocation, credit assignment, concept creation and others. 


1. Introduction

Human cognition is complex, involving a combination of multiple complex mechanisms with overlapping purposes.  I conjecture that this complexity is not entirely the consequence of the “messiness” of evolved systems like the brain.  Rather, I suggest, any system that must perform advanced cognition under severe computational resource constraints will inevitably display a significant level of complexity and multi-facetedness. 

This complexity, however, proves problematic for those attempting to grapple with the mind from the perspective of science and engineering.  In the context of experimental psychology, it means that controlled experiments are very rarely going to be able to get at the most interesting aspects of intelligence and mind.  And in the context of AGI design (artificial general intelligence; see Goertzel and Pennachin, 2006), it means that it is very difficult to construct detailed low-level cognitive mechanisms in such a way as to give rise to desired high-level cognitive behaviors.  In this paper I will focus on the AGI aspect rather than the human-psychology aspect, but many of the same issues exist in both cases.

I believe the complexity of mind can be grappled with effectively – both in human psychology and in AGI – but only if theorists and practitioners take more of a system-theoretic perspective and seek to understand both natural and artificial intelligences as complex, self-organizing systems with dynamics dominated by large-scale emergent structures.  In past writings (Goertzel, 1993, 1994, 1997, 2006) I have sought to take steps in this direction; and in this paper I will attempt to push this programme further, by discussing the complex systemic cognitive dynamics that I hypothesize to give rise to the critical emergent structures of “self” and “attention.”   These particular emergent structures are obviously critical for AGI, for human psychology and for mind science in general.

In (Goertzel, 2006) I follow up further on these concepts in an AGI context, showing how the systems theoretic notions introduced here may be used to give a systematic typology of the cognitive mechanisms involved in the Novamente AGI architecture, and an explanation of why it seems plausible to hypothesize that a fully implemented Novamente system, if properly educated in the context of an embodiment and shared environment, could give rise to self and attention as emergent structures.

The first theoretical step taken here is to introduce the general notions of forward synthesis and backward synthesis, as an elaboration of the theory of component-systems and self-generating systems proposed in Chaotic Logic (Goertzel, 1994).   The hypothesis is made that these general schematic processes encompass all forms of focused cognition carried out in intelligent systems operating under strongly limited computational resources.  Furthermore, I will lay stress here on the importance of the oscillatory dynamic in which forward and backward synthesis processes repeatedly follow each other.  This fundamental cognitive dynamic, I will argue, is important among other reasons because of the attractors it leads to.  The second key theoretical step taken here is the hypothesis that fundamental emergent structures such as the self and the “moving bubble of focused attention” may be fruitfully conceptualized as strange attractors of this oscillatory cognitive dynamic.

2. AGI and the Complexity of Cognition

 

The vast bulk of approaches to AI and even AGI, I feel, deals with the problem of the complexity of cognition by essentially ignoring it.   For instance, many existing approaches to AGI focus on some particular problem-solving mechanism and seek to utilize this mechanism for a broad variety of purposes, or to explain why the problems solved well by this mechanism are the most critical ones.  The problem with this sort of approach is that, given the combination of complex functions that is required of an AGI system, it seems very difficult to find any single mechanism that is capable of carrying out all the functions required of an AGI.  As an example of this I will cite an AGI approach of which I am very fond: Wang’s (1995, 2006) NARS system.  NARS places uncertain logic at the center of intelligence, but my feeling is that uncertain logic in itself is adequate neither for procedure learning nor for perceptual pattern recognition, to name just two of many aspects.  Cyc (Lenat and Guha, 1990) has a similar issue, placing crisp predicate logic at the center and then seeking to augment it with a language-processing front end and context-specific Bayesian nets, without confronting the possibility that crisp theorem-proving may not be adequate for carrying out most of the functionalities critical to general intelligence. 

Other AGI approaches take a hybrid strategy, in which an overall architecture is posited and then various diverse algorithms and structures are proposed to fill in the various functions defined by the architecture.   The problem that arises here is that, unless the different algorithms and structures are specifically designed to work effectively together, the odds of them interoperating productively in a sufficient variety of real-world situations are small.  Much of intelligence consists of emergent behaviors arising from the cooperative action of numerous complex problem-solving mechanisms; but the appropriate emergent behaviors will not simply emerge from the insertion of vaguely appropriate cognitive mechanisms inside each of a set of boxes defined by a high-level cognitive theory.  Rather, the cognitive mechanisms inside each of the boxes must be specifically designed and tuned for operation in the whole-cognitive-system context; i.e., with appropriate emergent behaviors and structures in mind. 

To illustrate this point, I will cite another AGI architecture of which I am very fond: Stan Franklin’s (2006) LIDA architecture. LIDA is a carefully thought-out architecture, well grounded in cognitive science research, but it is not clear to me whether the combination of learning mechanisms used within LIDA will be appropriately chosen and tuned to give rise to the emergent structures and dynamics characteristic of general intelligence, such as the self and the moving bubble of attention.  LIDA is a very general approach, which could be used as a container for a haphazard assemblage of learning techniques, or for a carefully assembled combination of learning techniques designed to lead to appropriate emergence.  So, this is not a criticism of LIDA as such, but rather an argument that, without concrete choices regarding the specifics of the learning algorithms, it is not possible to tell whether or not the LIDA system will be plausibly capable of a reasonable level of general intelligence.

The Novamente design seeks to avoid these various potential problems via the incorporation of a variety of cognitive mechanisms specifically designed for effective interoperation and for the induction of appropriate emergent behaviors and structures.  I believe this approach is conceptually sound; however, it does have the drawback of leading to a rather complex design in which the accurate description and development of each component requires careful consideration of all other components.  For this reason it is worthwhile to seek simplification and consolidation of cognitive mechanisms, insofar as is possible.  In this paper I introduce a conceptual framework that has been developed in order to provide a simplifying, unifying perspective on the various cognitive mechanisms existing in the Novamente design, and an abstract and coherent argument regarding the dynamics by which these mechanisms may give rise to appropriate emergent structures. 

The framework presented here is a further development of the system-theoretic perspective on cognition introduced in Chaotic Logic (Goertzel, 1994) and reiterated in The Hidden Pattern (Goertzel, 2006); in spite of its origins in the specific analysis of the Novamente system, it is intended to possess more general applicability. 

In the last couple of paragraphs I have explained the historical origins of the ideas to be presented here: the notions of forward and backward synthesis originated as part of an effort to simplify the collection of cognitive mechanisms utilized in the Novamente system.  These notions were then recognized as possessing potentially more general importance.  In the remainder of the paper I will proceed in the opposite direction: presenting forward and backward synthesis as general system-theoretic (and mathematical) notions, and exploring their general implications for the philosophy of cognition.  In another paper (Goertzel, 2006) these are applied to provide a systematic typology of the collection of Novamente cognitive processes. 

Furthermore, the (hypothesized, not yet observed in experiments) emergence of self and attention from the overall dynamics of the Novamente system, which in prior publications has largely been discussed either in very general conceptual terms or else in terms of the specific interactions between specific system components, may now be viewed as a particular case of the general emergence of self and attention as strange attractors of forward-backward synthesis dynamics.  This is the sort of conclusion one typically wants to get out of systems theory: it rarely tells one specific new things about specific systems directly, but it frequently allows one to better organize and understand specific things about specific systems, thus in some cases pointing the way to new discoveries.

3. Forward and Backward Synthesis as General Cognitive Dynamics

The notion of forward and backward synthesis presented here is an elaboration of a system-theoretic approach to cognition developed by George Kampis and the author in the early 1990s.  This section presents forward and backward synthesis in this general context.

3.1. Component-Systems and Self-Generating Systems

Let us begin with the concept of a “component-system”, as described in George Kampis’s (1991) book Self-Modifying Systems in Biology and Cognitive Science, and as modified into the concept of a “self-generating system” or SGS in Chaotic Logic.  Roughly speaking, a Kampis-style component-system consists of a set of components that combine with each other to form other, compound components.  The metaphor Kampis uses is that of Lego blocks, combining to form bigger Lego structures.  Compound structures may in turn be combined together to form yet bigger compound structures.   A self-generating system is basically the same concept as a component-system, but understood to be computable, whereas Kampis claims that component-systems are uncomputable.

Next, in SGS theory there is also a notion of reduction (not present in the Lego metaphor): sometimes when components are combined in a certain way, a “reaction” happens, which may lead to the elimination of some of the components.  One relevant metaphor here is chemistry.  Another is abstract algebra: for instance, if we combine a component f with its “inverse” component f^{-1}, both components are eliminated.  Thus, we may think about two stages in the interaction of sets of components: combination, and reduction.  Reduction may be thought of as algebraic simplification, governed by a set of rules that apply to a newly created compound component, based on the components that are assembled within it.

Formally, suppose {C1, C2,...} is the set of components present in a discrete-time component-system at time t.  Then, the components present at time t+1 are a subset of the set of components of the form

Reduce(Join(Ci(1), ..., Ci(r)))

where Join is a joining operation, and Reduce is a reduction operator.  The joining operation is assumed to map tuples of components into components, and the reduction operator is assumed to map the space of components into itself.  Of course, the specific nature of a component system is totally dependent on the particular definitions of the reduction and joining operators; below I will specify these operators in the context of the Novamente AGI system, but for the purpose of the general theoretical discussion in this section they may be left general.
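To make this abstract step concrete, here is a minimal Python sketch of a single discrete time-step of a component-system.  It is purely illustrative and not drawn from Kampis or from any actual implementation: the token-tuple components, the concatenation-style Join, and the inverse-cancelling Reduce are all hypothetical choices of the two operators.

```python
from itertools import permutations

# Toy components: tuples of tokens; "f" and "f'" are treated as mutual inverses.
INVERSES = {("f", "f'"), ("f'", "f")}

def join(components):
    """Join operator (toy): concatenate component token-tuples into a compound."""
    return tuple(tok for c in components for tok in c)

def reduce_component(compound):
    """Reduction operator (toy): repeatedly cancel adjacent inverse pairs,
    i.e. algebraic simplification of a newly created compound."""
    tokens = list(compound)
    changed = True
    while changed:
        changed = False
        for i in range(len(tokens) - 1):
            if (tokens[i], tokens[i + 1]) in INVERSES:
                del tokens[i:i + 2]
                changed = True
                break
    return tuple(tokens)

def step(components, arity=2):
    """One time-step: all reduced joins of ordered r-tuples of current components."""
    products = {reduce_component(join(combo))
                for combo in permutations(components, arity)}
    products.discard(())   # a fully cancelled (empty) compound is dropped
    return products

pool = {("f",), ("f'",), ("g",)}
print(step(pool))   # the f/f' join cancels away; the other compounds survive
```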

It is also important that (simple or compound) components may have various quantitative properties.  Given appropriate theoretical understanding, these properties may sometimes be inferred by knowing the ingredients that went into making up a compound component, and the reductions that occurred.  Or, sometimes, experiments must be done on the component to calculate its quantitative properties.

3.2. Forward and Backward Synthesis

Now we move on to the main point.  The basic idea put forth in this paper is that all or nearly all focused cognitive processes are expressible using two general process-schemata called forward and backward synthesis, to be presented below.  The notion of “focused cognitive process” will be exemplified more thoroughly below, but in essence what is meant is a cognitive process that begins with a small number of items (drawn from memory or perception) as its focus, and has as its goal discovering something about these items, or discovering something about something else in the context of these items or in a way strongly biased by these items.  This is different from, for example, a cognitive process whose goal is more broadly-based and explicitly involves all or a large percentage of the knowledge in an intelligent system’s memory store.



Figure 1.  The General Process of Forward Synthesis




The forward and backward synthesis processes as I conceive them, in the general framework of SGS theory, are as follows:

Forward synthesis:

  1. Begin with some initial components (the initial “current pool”), an additional set of components identified as “combinators” (combination operators), and a goal function
  2. Combine the components in the current pool, utilizing the combinators, to form product components in various ways, carrying out reductions as appropriate, and calculating relevant quantities associated with components as needed
  3. Select the product components that seem most promising according to the goal function, and add these to the current pool (or else simply define these as the current pool)
  4. Return to Step 2
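The loop just listed can be sketched in Python as follows, reusing the toy join/reduce operators above.  The goal_fn scoring function, the beam_width cap and the fixed iteration count are illustrative assumptions, not part of the general definition.

```python
def forward_synthesis(initial_pool, combinators, goal_fn, join, reduce_fn,
                      beam_width=10, iterations=5):
    """Greedy forward synthesis: repeatedly combine pool components with the
    combinators, reduce the products, and keep only the most promising ones."""
    pool = set(initial_pool)
    for _ in range(iterations):
        candidates = set()
        for component in pool:
            for op in combinators:
                candidates.add(reduce_fn(join((component, op))))
        # "Filter" step: keep the products scoring best on the goal function.
        # (This variant simply redefines the current pool rather than appending to it.)
        pool = set(sorted(candidates, key=goal_fn, reverse=True)[:beam_width])
    return pool
```

Note that this sketch scores candidates as they are generated and keeps only a bounded pool, so the filtering happens implicitly, in the manner discussed in the more formal treatment below.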


Figure 2.  The General Process of Backward Synthesis


Backward synthesis:

  1. Begin with some components (the initial “current pool”), and a goal function
  2. Seek components so that, if one combines them to form product components using the combinators and then performs appropriate reductions, one obtains (as many as possible of) the components in the current pool
  3. Use the newly found constructions of the components in the current pool to update the quantitative properties of the components in the current pool, and also (via the current pool) the quantitative properties of the components in the initial pool
  4. Out of the components found in Step 2, select the ones that seem most promising according to the goal function, and add these to the current pool (or else simply define these as the current pool)
  5. Return to Step 2
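A brute-force Python sketch of this search is given below.  The pairwise-join construction of products, the set_size parameter and the goal_fn bias are illustrative assumptions only (a real implementation would search long-term memory far more cleverly than enumerating candidate sets).

```python
from itertools import combinations

def backward_synthesis(current_pool, memory, combinators, goal_fn, join, reduce_fn,
                       set_size=2, keep=10):
    """Search a memory store for small component-sets whose joined-and-reduced
    products reproduce as many of the current-pool components as possible."""
    target = set(current_pool)
    scored = []
    for cand in combinations(memory, set_size):
        # Forward-synthesize from the candidate set plus the combinators
        # (toy version: all pairwise joins, reduced).
        elements = list(cand) + list(combinators)
        products = {reduce_fn(join(pair)) for pair in combinations(elements, 2)}
        covered = target & products
        if covered:
            # Score: how much of the pool this candidate explains, plus the goal bias.
            scored.append((len(covered) + goal_fn(cand), cand))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [cand for _, cand in scored[:keep]]
```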

 

Less technically and more conceptually, one may rephrase these process descriptions as follows:

 

Forward synthesis: Iteratively build compounds from the initial component pool using the combinators, greedily seeking compounds that seem likely to achieve the goal

 

Backward synthesis: Iteratively search (the system’s long-term memory) for component-sets that combine using the combinators to form the initial component pool (or subsets thereof), greedily seeking component-sets that seem likely to achieve the goal

 

      More formally, forward synthesis may be specified as follows.  Let X denote the set of combinators, and let Y0 denote the initial pool of components (the initial focus of the cognitive process).  Given Yi, let Zi denote the set

Reduce(Join(Ci(1), ..., Ci(r)))

where the Ci are drawn from Yi or from X.  We may then say

 

Yi+1 = Filter(Zi)

 

where Filter is a function that selects a subset of its arguments. 

Backward synthesis, on the other hand, begins with a set W of components and a set X of combinators, and tries to find a series Y0, ..., Yn so that, according to the process of forward synthesis, Yn = W.

In practice, of course, the implementation of a forward synthesis process need not involve the explicit construction of the full set Zi.   Rather, the filtering operation takes place implicitly during the construction of Yi+1.   The result, however, is that one gets some subset of the compounds producible via joining and reduction from the set of components present in Yi plus the combinators X.

Conceptually one may view forward synthesis as a very generic sort of “growth process,” and backward synthesis as a very generic sort of “figuring out how to grow something.”  The intuitive idea underlying the present proposal is that these forward-going and backward-going “growth processes” are among the essential foundations of cognitive control, and that a conceptually sound design for cognitive control should explicitly make use of this fact.  To abstract away from the details, what these processes are about is:

 

1.     taking the general dynamic of compound-formation and reduction, as outlined by Kampis (1991) and in Chaotic Logic

2.     introducing goal-directed pruning (“filtering”) into this dynamic so as to account for the limitations of computational resources that are a necessary part of pragmatic intelligence

 

3.3. The Dynamic of Iterative Forward-Backward Synthesis

While forward and backward synthesis are both very useful on their own, they achieve their greatest power when harnessed together.  It is my hypothesis that the dynamic pattern of alternating forward and backward synthesis has a fundamental role in cognition.  Put simply, forward synthesis creates new mental forms by combining existing ones.  Then, backward synthesis seeks simple explanations for the forms in the mind, including the newly created ones; and this explanation itself then constitutes additional new forms in the mind, to be used as fodder for the next round of forward synthesis.  Or, to put it yet more simply:

 

… Combine … Explain … Combine … Explain … Combine …

 

It is not hard to express this alternating dynamic more formally, as well. 

Let X denote any set of components.

Let F(X) denote a set of components which is the result of forward synthesis on X.

Let B(X) denote a set of components which is the result of backward synthesis of X.  We assume also a heuristic biasing the synthesis process toward simple constructs.

Let S(t) denote a set of components at time t, representing part of a system’s knowledge base.

Let I(t) denote components resulting from the external environment at time t.

Then, we may consider a dynamical iteration of the form

S(t+1) = B( F(S(t) + I(t)) )

This expresses the notion of alternating forward and backward synthesis formally, as a dynamical iteration on the space of sets of components.  We may then speak about attractors of this iteration: fixed points, limit cycles and strange attractors.  One of the key hypotheses I wish to put forward here is that some key emergent cognitive structures are strange attractors of this equation.  The iterative dynamic of combination and explanation leads to the emergence of certain complex structures that are, in essence,  maintained when one recombines their parts and then seeks to explain the recombinations.  These structures are built in the first place through iterative recombination and explanation, and then survive in the mind because they are conserved by this process.  They then ongoingly guide the construction and destruction of various other temporary mental structures that are not so conserved.
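A schematic rendering of this iteration in Python follows, assuming forward_synth and backward_synth callables along the lines of the earlier sketches; the function names and the crude recurrence diagnostic are illustrative assumptions, not claims about any particular system.

```python
def iterate_fb(S0, inputs, forward_synth, backward_synth, steps=100):
    """Iterate S(t+1) = B(F(S(t) + I(t))), treating '+' as set union,
    and record the trajectory of component-sets visited."""
    S = frozenset(S0)
    trajectory = [S]
    for t in range(steps):
        I_t = frozenset(inputs(t))            # components arriving from the environment
        F_out = forward_synth(S | I_t)        # "combine": forward synthesis
        S = frozenset(backward_synth(F_out))  # "explain": backward synthesis
        trajectory.append(S)
    return trajectory

def revisit_rate(trajectory, window=20):
    """Crude attractor diagnostic: fraction of distinct recent states that recur
    within the window (high values suggest the trajectory is circling a region)."""
    recent = trajectory[-window:]
    distinct = set(recent)
    return sum(recent.count(s) > 1 for s in distinct) / max(1, len(distinct))
```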

4. Self and Focused Attention as Approximate Attractors of the Dynamic of Iterated Forward/Backward Synthesis

In The Hidden Pattern I have argued that two key aspects of intelligence are emergent structures that may be called the “self” and the “attentional focus.”[1]  These, it is suggested, are aspects of intelligence that may not effectively be wired into the infrastructure of an intelligent system, though of course the infrastructure may be configured in such a way as to encourage their emergence.  Rather, these aspects, by their nature, are only likely to be effective if they emerge from the cooperative activity of various cognitive processes acting within a broad base of knowledge. 

In the previous section I have described the pattern of ongoing habitual oscillation between forward and backward synthesis as a kind of “dynamical iteration.”   Here I will argue that both self and attentional focus may be viewed as strange attractors of this iteration.  The mode of argument is relatively informal.  References will be given into the cognitive science literature, but the essential processes under consideration are ones that are poorly understood from an empirical perspective, due to the extreme difficulty involved in studying them experimentally.  For understanding self and attentional focus, we are stuck in large part with introspection, which is famously unreliable in some contexts, yet still (I feel) dramatically better than having no information at all.  Anyhow, the perspective on self and attentional focus given here is a synthesis of empirical and introspective notions, drawn largely from the published thinking and research of others but with a few original twists.

 

4.1. Self

The “self” in the present context refers to the “phenomenal self” (Metzinger, 2004) or “self-model” (Epstein, 1978).  That is, the self is the model that a system builds internally, reflecting the patterns observed in the (external and internal) world that directly pertain to the system itself.  As is well known in everyday human life, self-models need not be completely accurate to be useful; and in the presence of certain psychological factors, a more accurate self-model may not necessarily be advantageous.  But a self-model that is too badly inaccurate will lead to a badly-functioning system that is unable to effectively act toward the achievement of its own goals. 

The value of a self-model for any intelligent system carrying out embodied agentive cognition is obvious.  And beyond this, another primary use of the self is as a foundation for metaphors and analogies in various domains.  Patterns recognized as pertaining to the self are analogically extended to other entities.  In some cases this leads to conceptual pathologies, such as the anthropomorphization of trees, rocks and other such objects that one sees in some precivilized cultures.  But in other cases this kind of analogy leads to robust sorts of reasoning – for instance, in reading Lakoff and Nunez’s (2002) intriguing explorations of the cognitive foundations of mathematics, it is pretty easy to see that most of the metaphors on which they hypothesize mathematics to be based are grounded in the mind’s conceptualization of itself as a spatiotemporally embedded entity, which in turn is predicated on the mind’s having a conceptualization of itself (a self) in the first place.

A self-model can in many cases form a self-fulfilling prophecy (to make an obvious double entendre!).   Actions are generated based on one’s model of what sorts of actions one can and/or should take; and the results of these actions are then incorporated into one’s self-model.  If a self-model proves a generally bad guide to action selection, this may never be discovered, unless said self-model includes the knowledge that semi-random experimentation is often useful.

In what sense, then, may it be said that self is an attractor of iterated forward-backward synthesis?  Backward synthesis infers the self from observations of system behavior.  The system asks: What kind of system might I be, in order to give rise to these behaviors that I observe myself carrying out?   Based on asking itself this question, it constructs a model of itself, i.e. it constructs a self.  Then, this self guides the system’s behavior: the system builds new logical relationships between its self-model and various other entities, in order to guide its future actions toward achieving its goals.  Based on the behaviors newly induced via this constructive, forward-synthesis activity, the system may then engage in backward synthesis again and ask: What must I be now, in order to have carried out these new actions?  And so on. 
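Purely as an illustration of this loop (and emphatically not as a model of any actual cognitive architecture), here is a deliberately toy Python sketch in which a hypothetical “self-model” is repeatedly re-inferred from observed behavior (the backward step) and then used to bias action selection (the forward step).

```python
import random

def infer_self_model(behaviors):
    """Backward step (toy): the self-model is just the relative frequency
    of the action types the system has observed itself performing."""
    counts = {}
    for b in behaviors:
        counts[b] = counts.get(b, 0) + 1
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def act(self_model, actions, explore=0.1):
    """Forward step (toy): mostly act according to the self-model,
    with a little semi-random experimentation."""
    if not self_model or random.random() < explore:
        return random.choice(actions)
    return max(self_model, key=self_model.get)

actions = ["explore", "exploit", "rest"]
behaviors = [random.choice(actions) for _ in range(5)]   # "infancy": near-random behavior
for t in range(200):
    model = infer_self_model(behaviors)
    behaviors.append(act(model, actions))
print(infer_self_model(behaviors))
```

After many cycles the inferred model tends to stabilize around whichever behavior happened to dominate early on: a toy analogue of the self-fulfilling, self-reinforcing character of the self-model discussed above.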

My hypothesis is that, after repeated iterations of this sort through infancy, a kind of self-reinforcing attractor finally emerges during early childhood, and we have a self-model that is resilient and doesn’t change dramatically when new instances of action- or explanation-generation occur.   This is not strictly a mathematical attractor, though, because over a long period of time the self may well shift significantly.  But, for a mature self, many hundreds of thousands or millions of forward-backward synthesis cycles may occur before the self-model is dramatically modified.  For relatively long periods of time, small changes within the context of the existing self may suffice to allow the system to control itself intelligently.

Finally, it is interesting to speculate regarding how self may differ in future AI systems as opposed to in humans.  The relative stability we see in human selves may not exist in AI systems that can self-improve and change more fundamentally and rapidly than humans can.  There may be a situation in which, as soon as a system has understood itself decently, it radically modifies itself and hence violates its existing self-model.  Thus: intelligence without a long-term stable self.  In this case the “attractor-ish” nature of the self holds only over much shorter time scales than for human minds or human-like minds.  But the alternating process of forward and backward synthesis for self-construction is still critical, even though no reasonably stable self-constituting attractor ever emerges. The psychology of such intelligent systems will almost surely be beyond human beings' capacity for comprehension and empathy.

4.2. Attentional Focus

Next, the notion of an “attentional focus” is similar to Baars’ (1988) notion of a Global Workspace: a collection of mental entities that are, at a given moment, receiving far more than the usual share of an intelligent system’s computational resources.  Due to the amount of attention paid to items in the attentional focus, at any given moment these items are in large part driving the cognitive processes going on elsewhere in the mind as well – because the cognitive processes acting on the items in the attentional focus often involve other mental items, not in the attentional focus, as well (and sometimes this results in pulling these other items into the attentional focus).  An intelligent system must constantly shift its attentional focus from one set of entities to another, based on changes in its environment and on its own shifting discoveries. 

In the human mind, there is a self-reinforcing dynamic pertaining to the collection of entities in the attentional focus at any given point in time, resulting from the observation that if A is in the attentional focus, and A and B have often been associated in the past, then the odds are increased that B will soon be in the attentional focus.  This basic observation has been refined tremendously via a large body of cognitive psychology work; and neurologically it follows not only from Hebb’s (1949) classic work on neural reinforcement learning, but also from numerous more modern refinements (Sutton and Barto, 1998).   It implies that two items A and B, if both in the attentional focus, can reinforce each other’s presence in the attentional focus, hence forming a kind of conspiracy to keep each other in the limelight.  But of course, this kind of dynamic must be counteracted by a pragmatic tendency to remove items from the attentional focus if giving them attention is not providing sufficient utility in terms of the achievement of system goals.
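The reinforcement rule just described can be sketched as follows; the association matrix, the boost and decay constants, and the focus capacity are hypothetical parameters used only to illustrate the dynamic (the decay stands in for the pragmatic removal of items that stop earning attention).

```python
def update_focus(activation, focus, assoc, boost=0.5, decay=0.9, capacity=2):
    """One attentional update: focused items spread activation to their associates;
    the most active items form the next focus, while unboosted items decay away."""
    new_act = {item: value * decay for item, value in activation.items()}
    for a in focus:
        for b, strength in assoc.get(a, {}).items():
            new_act[b] = new_act.get(b, 0.0) + boost * strength
    next_focus = set(sorted(new_act, key=new_act.get, reverse=True)[:capacity])
    return new_act, next_focus

# Hypothetical association strengths between mental items A, B, C, D.
assoc = {"A": {"B": 0.9, "C": 0.2}, "B": {"A": 0.8}, "C": {"D": 0.7}}
activation = {"A": 1.0, "B": 0.1, "C": 0.1, "D": 0.0}
focus = {"A"}
for _ in range(10):
    activation, focus = update_focus(activation, focus, assoc)
print(focus)   # A and B tend to keep each other in focus via their mutual association
```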

The forward and backward synthesis perspective provides a more systematic perspective on this self-reinforcing dynamic.  Forward synthesis occurs in the attentional focus when two or more items in the focus are combined to form new items, new relationships, new ideas.  This happens continually, as one of the main purposes of the attentional focus is combinational.  On the other hand, backward synthesis then occurs when a combination that has been speculatively formed is then linked in with the remainder of the mind (the “unconscious”, the vast body of knowledge that is not in the attentional focus at the given moment in time).  Backward synthesis basically checks to see what support the new combination has within the existing knowledge store of the system.  Thus, forward/backward synthesis basically comes down to “generate and test”, where the testing takes the form of attempting to integrate the generated structures with the ideas in the unconscious long-term memory.  One of the most obvious examples of this kind of dynamic is creative thinking (Boden, 1994; Goertzel, 1997), where the attentional focus continually combinationally creates new ideas, which are then tested via checking which ones can be validated in terms of (built up from) existing knowledge. 

The backward synthesis stage may result in items being pushed out of the attentional focus, to be replaced by others.  Likewise may the forward synthesis stage: the combinations may overshadow and then replace the things combined.  However, in human minds and functional AI minds, the attentional focus will not be a complete chaos with constant turnover: sometimes the same set of ideas – or a shifting set of ideas within the same overall family of ideas -- will remain in focus for a while.  When this occurs it is because this set or family of ideas forms an approximate attractor for the dynamics of the attentional focus, in particular for the forward/backward synthesis dynamic of speculative combination and integrative explanation.  Often, for instance, a small “core set” of ideas will remain in the attentional focus for a while, but will not exhaust the attentional focus: the rest of the attentional focus will then, at any point in time, be occupied with other ideas related to the ones in the core set.   Often this may mean that, for a while, the whole of the attentional focus will move around quasi-randomly through a “strange attractor” consisting of the set of ideas related to those in the core set.

5. Conclusion

The ideas presented above (the notions of forward and backward synthesis, and the hypothesis of self and attentional focus as attractors of the iterative forward-backward synthesis dynamic) are quite generic and are hypothetically proposed to be applicable to any cognitive system, natural or artificial.  In another paper (Goertzel, 2006), I get more specific and discuss the manifestation of the above ideas in the context of the Novamente AGI architecture.  I have found that the forward/backward synthesis approach is a valuable tool for conceptualizing Novamente’s cognitive dynamics.  And, I conjecture that a similar utility may be found more generally.

Next, so as not to end on too blasé a note, I will also make a stronger hypothesis: that in order for a physical or software system to achieve intelligence that is roughly human-level in both capability and generality, using computational resources on the same order of magnitude as the human brain, this system must manifest the dynamic of iterated forward and backward synthesis described above, and must give rise to self and attentional focus as emergent attractors of that dynamic.

To prove the truth of a hypothesis of this nature would seem to require mathematics fairly far beyond anything that currently exists.  Nonetheless, I feel it is important to formulate and discuss such hypotheses, so as to point the way for future investigations, both theoretical and pragmatic.

References

 

Baars, Bernard J. (1988). A Cognitive Theory of Consciousness. New York: Cambridge University Press

Boden, Margaret (1994). The Creative Mind. Routledge

Epstein, Seymour (1980). The Self-Concept: A Review and the Proposal of an Integrated Theory of Personality, pp. 27-39 in Personality: Basic Issues and Current Research. Englewood Cliffs: Prentice-Hall

Goertzel, Ben (2006). Virtual Easter Egg Hunting: A Thought-Experiment in Embodied Social Learning, Cognitive Process Integration, and the Dynamic Emergence of the Self. Proceedings of the 2006 AGI Workshop, Bethesda MD, IOS Press

Goertzel, Ben (1993). The Evolving Mind. Gordon and Breach

Goertzel, Ben (1993). The Structure of Intelligence. Springer-Verlag

Goertzel, Ben (1997). From Complexity to Creativity. Plenum Press

Goertzel, Ben (1994). Chaotic Logic. Plenum Press

Goertzel, Ben (2002). Creating Internet Intelligence. Plenum Press

Goertzel, Ben and Cassio Pennachin, Editors (2006). Artificial General Intelligence. Springer-Verlag

Goertzel, Ben and Stephan Vladimir Bugaj (2006). The Path to Posthumanity. Academica Press

Goertzel, Ben (2006). The Hidden Pattern: A Patternist Philosophy of Mind. Brown-Walker Press, to appear

Hebb, Donald (1949). The Organization of Behavior. Wiley

Kampis, George (1991). Self-Modifying Systems in Biology and Cognitive Science. Plenum Press

Lakoff, George and Rafael Nunez (2002). Where Mathematics Comes From. Basic Books

Lenat, D. and R. V. Guha (1990). Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project. Addison-Wesley

Metzinger, Thomas (2004). Being No One. MIT Press

Ramamurthy, U., Baars, B., D'Mello, S. K. and Franklin, S. (2006). LIDA: A Working Model of Cognition. Proceedings of the 7th International Conference on Cognitive Modeling, eds. Danilo Fum, Fabio Del Missier and Andrea Stocco, pp. 244-249. Edizioni Goliardiche, Trieste, Italy

Sutton, Richard and Andrew Barto (1998). Reinforcement Learning. MIT Press

Wang, Pei (2006). Rigid Flexibility: The Logic of Intelligence. Springer-Verlag

 



[1] The term “attentional focus” is not used in (Goertzel, 1997), but the concept is there under the name “perceptual-cognitive loop.”