

A System-Theoretic Analysis of Focused Cognition,

and its Implications for the Emergence of Self and Attention

 

Ben Goertzel

Novamente LLC

November 4, 2006


Abstract: A unifying framework for the description and analysis of focused cognitive processes is presented, based on the system-theoretic notions of forward and backward synthesis.  Forward synthesis iteratively creates combinations, seeded from an initial focus-set of mental items; backward synthesis takes a set of mental items and tries to create them iteratively via forward synthesis.  The utility of a dynamic involving alternating forward and backward synthesis is discussed.  The phenomenal self and the shifting focus of attention, two critical aspects of cognitive systems, are hypothesized to emerge as strange attractors of this alternating dynamic.  In a companion paper, this framework is used to provide a systematic typology for the various cognitive agents in the Novamente AI system, including probabilistic inference, evolutionary learning, attention allocation, credit assignment, concept creation and others.


1. Introduction

Human cognition is complex, involving a combination of multiple complex mechanisms with overlapping purposes.  I conjecture that this complexity is not entirely the consequence of the "messiness" of evolved systems like the brain.  Rather, I suggest, any system that must perform advanced cognition under severe computational resource constraints will inevitably display a significant level of complexity and multi-facetedness.

This complexity, however, proves problematic for those attempting to grapple with the mind from the perspective of science and engineering.  In the context of experimental psychology, it means that controlled experiments are very rarely going to be able to get at the most interesting aspects of intelligence and mind.  And in the context of AGI design (artificial general intelligence; see Goertzel and Pennachin, 2006), it means that it is very difficult to construct detailed low-level cognitive mechanisms in such a way as to give rise to desired high-level cognitive behaviors.  In this paper I will focus on the AGI aspect rather than the human-psychology aspect, but many of the same issues exist in both cases.

I believe the complexity of mind can be grappled with effectively – both in human psychology and in AGI – but only if theorists and practitioners take more of a system-theoretic perspective and seek to understand both natural and artificial intelligences as complex, self-organizing systems with dynamics dominated by large-scale emergent structures.  In past writings (Goertzel, 1993, 1994, 1997, 2006) I have sought to take steps in this direction; and in this paper I will attempt to push this programme further, by discussing the complex systemic cognitive dynamics that I hypothesize to give rise to the critical emergent structures of "self" and "attention."  These particular emergent structures are obviously critical for AGI, for human psychology and for mind science in general.

In (Goertzel, 2006) I follow up further on these concepts in an AGI context, showing how the system-theoretic notions introduced here may be used to give a systematic typology of the cognitive mechanisms involved in the Novamente AGI architecture, and an explanation of why it seems plausible to hypothesize that a fully implemented Novamente system, if properly educated in the context of an embodiment and shared environment, could give rise to self and attention as emergent structures.

The first theoretical step taken here is to introduce the general notions of forward synthesis and backward synthesis, as an elaboration of the theory of component-systems and self-generating systems proposed in Chaotic Logic (Goertzel, 1994).  The hypothesis is made that these general schematic processes encompass all forms of focused cognition carried out in intelligent systems operating under strongly limited computational resources.  Furthermore, I will lay stress here on the importance of the oscillatory dynamic in which forward and backward synthesis processes repeatedly follow each other.  This fundamental cognitive dynamic, I will argue, is important among other reasons because of the attractors it leads to.  The second key theoretical step taken here is the hypothesis that fundamental emergent structures such as the self and the "moving bubble of focused attention" may be fruitfully conceptualized as strange attractors of this oscillatory cognitive dynamic.

2. AGI and the Complexity of Cognition

 

The vast bulk of approaches to AI and even AGI, I feel, deal with the problem of the complexity of cognition by essentially ignoring it.  For instance, many existing approaches to AGI focus on some particular problem-solving mechanism and seek to utilize this mechanism for a broad variety of purposes, or to explain why the problems solved well by this mechanism are the most critical ones.  The problem with this sort of approach is that, given the combination of complex functions that is required of an AGI system, it seems very difficult to find any single mechanism that is capable of carrying out all the functions required of an AGI.  As an example of this I will cite an AGI approach of which I am very fond: Wang's (1995, 2006) NARS system.  NARS places uncertain logic at the center of intelligence, but my feeling is that uncertain logic in itself is not adequate for procedure learning, nor for perceptual pattern recognition, to name just two of many aspects.  Cyc (Lenat and Guha, 1990) has a similar issue, placing crisp predicate logic at the center and then seeking to augment it with a language-processing front end and context-specific Bayesian nets, without confronting the issue that crisp theorem-proving may not be adequate for carrying out most of the functionalities critical to general intelligence.

Other AGI approaches take a hybrid strategy, in which an overall architecture is posited and then various diverse algorithms and structures are proposed to fill in the various functions defined by the architecture.  The problem that arises here is that, unless the different algorithms and structures are specifically designed to work effectively together, the odds of them interoperating productively in a sufficient variety of real-world situations are small.  Much of intelligence consists of emergent behaviors arising from the cooperative action of numerous complex problem-solving mechanisms; but the appropriate emergent behaviors will not simply emerge from the insertion of vaguely appropriate cognitive mechanisms inside each of a set of boxes defined by a high-level cognitive theory.  Rather, the cognitive mechanisms inside each of the boxes must be specifically designed and tuned for operation in the whole-cognitive-system context; i.e., with appropriate emergent behaviors and structures in mind.

To make this point, I will cite another AGI architecture of which I am very fond: Stan Franklin's (2006) LIDA architecture.  LIDA is a very well-thought-out architecture which is well grounded in cognitive science research, but it is not clear to me whether the combination of learning mechanisms used within LIDA is going to be appropriately chosen and tuned to give rise to the emergent structures and dynamics characteristic of general intelligence, such as the self and the moving bubble of attention.  LIDA is a very general approach, which could be used as a container for a haphazard assemblage of learning techniques, or for a carefully assembled combination of learning techniques designed to lead to appropriate emergence.  So, this is not a criticism of LIDA as such, but rather an argument that without concrete choices regarding the specifics of the learning algorithms, it is not possible to tell whether or not the LIDA system is going to be plausibly capable of a reasonable level of general intelligence.

The Novamente design seeks to avoid these various potential problems via the incorporation of a variety of cognitive mechanisms specifically designed for effective interoperation and for the induction of appropriate emergent behaviors and structures.  I believe this approach is conceptually sound; however, it does have the drawback of leading to a rather complex design in which the accurate description and development of each component requires careful consideration of all other components.  For this reason it is worthwhile to seek simplification and consolidation of cognitive mechanisms, insofar as is possible.  In this paper I introduce a conceptual framework that has been developed in order to provide a simplifying, unifying perspective on the various cognitive mechanisms existing in the Novamente design, and an abstract and coherent argument regarding the dynamics by which these mechanisms may give rise to appropriate emergent structures.

The framework presented here is a further development of the system-theoretic perspective on cognition introduced in Chaotic Logic (Goertzel, 1994) and reiterated in The Hidden Pattern (Goertzel, 2006); in spite of its origins in the specific analysis of the Novamente system, it is intended to possess more general applicability.

In the last couple of paragraphs I have explained the historical origins of the ideas to be presented here: the notions of forward and backward synthesis originated as part of an effort to simplify the collection of cognitive mechanisms utilized in the Novamente system.  These notions were then recognized as possessing potentially more general importance.  In the remainder of the paper I will proceed in the opposite direction: presenting forward and backward synthesis as general system-theoretic (and mathematical) notions, and exploring their general implications for the philosophy of cognition.  In another paper (Goertzel, 2006) these are applied to provide a systematic typology of the collection of Novamente cognitive processes.

Furthermore, the (hypothesized, not yet observed in experiments) emergence of self and attention from the overall dynamics of the Novamente system, which in prior publications has largely been discussed either in very general conceptual terms or else in terms of the specific interactions between specific system components, may now be viewed as a particular case of the general emergence of self and attention as strange attractors of forward-backward synthesis dynamics.  This is often the sort of conclusion one wants to get out of systems theory.  It rarely tells one anything specifically new about particular systems directly -- but it frequently allows one to better organize and understand specific things about specific systems, thus in some cases pointing the way to new discoveries.

3. Forward and Backward Synthesis as General Cognitive Dynamics

The notion of forward and backward synthesis presented here is an elaboration of a system-theoretic approach to cognition developed by George Kampis and the author in the early 1990s.  This section presents forward and backward synthesis in this general context.

3.1. Component-Systems and Self-Generating Systems

Let us begin with the concept of a "component-system", as described in George Kampis's (1991) book Self-Modifying Systems in Biology and Cognitive Science, and as modified into the concept of a "self-generating system" or SGS in Chaotic Logic.  Roughly speaking, a Kampis-style component-system consists of a set of components that combine with each other to form other, compound components.  The metaphor Kampis uses is that of Lego blocks, combining to form bigger Lego structures.  Compound structures may in turn be combined together to form yet bigger compound structures.  A self-generating system is basically the same concept as a component-system, but understood to be computable, whereas Kampis claims that component-systems are uncomputable.

Next, in SGS theory there is also a notion of reduction (not present in the Lego metaphor): sometimes when components are combined in a certain way, a "reaction" happens, which may lead to the elimination of some of the components.  One relevant metaphor here is chemistry.  Another is abstract algebra: for instance, if we combine a component f with its "inverse" component f⁻¹, both components are eliminated.  Thus, we may think about two stages in the interaction of sets of components: combination, and reduction.  Reduction may be thought of as algebraic simplification, governed by a set of rules that apply to a newly created compound component, based on the components that are assembled within it.

Formally, suppose {C1, C2, ...} is the set of components present in a discrete-time component-system at time t.  Then, the components present at time t+1 are a subset of the set of components of the form

Reduce(Join(Ci(1), ..., Ci(r)))

where Join is a joining operation, and Reduce is a reduction operator.  The joining operation is assumed to map tuples of components into components, and the reduction operator is assumed to map the space of components into itself.  Of course, the specific nature of a component system is totally dependent on the particular definitions of the reduction and joining operators; below I will specify these operators in the context of the Novamente AGI system, but for the purpose of the general theoretical discussion in this section they may be left general.
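
To make the formalism concrete, here is a minimal sketch in Python of one discrete-time step of a component-system. All of the specifics are illustrative assumptions rather than part of SGS theory: components are modeled as frozensets of atomic symbols, Join is set union, and Reduce cancels any symbol that co-occurs with its "inverse", in the spirit of the algebraic example above.

```python
from itertools import combinations

# Illustrative assumptions: a component is a frozenset of atomic symbols;
# the "inverse" of a symbol x is written ("inv", x).

def join(parts):
    """Join a tuple of components into a compound component (here: set union)."""
    out = frozenset()
    for p in parts:
        out |= p
    return out

def reduce_component(comp):
    """Algebraic simplification: a symbol and its inverse eliminate each other."""
    survivors = set(comp)
    for x in comp:
        inv = x[1] if isinstance(x, tuple) and x[0] == "inv" else ("inv", x)
        if x in survivors and inv in survivors:
            survivors.discard(x)
            survivors.discard(inv)
    return frozenset(survivors)

def step(pool, arity=2):
    """Components present at time t+1: reduced joins of r-tuples from the pool."""
    return {reduce_component(join(parts)) for parts in combinations(pool, arity)}

# Example: f combined with its inverse is eliminated entirely.
pool = {frozenset({"f"}), frozenset({("inv", "f")}), frozenset({"g"})}
print(step(pool))  # the f / inv-f pair reduces to the empty component
```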

It is also important that (simple or compound) components may have various quantitative properties.  Given appropriate theoretical understanding, these properties may sometimes be inferred by knowing the ingredients that went into making up a compound component, and the reductions that occurred.  Or, sometimes, experiments must be done on the component to calculate its quantitative properties.

3.2. Forward and Backward Synthesis

Now we move on to the main point.  The basic idea put forth in this paper is that all or nearly all focused cognitive processes are expressible using two general process-schemata called forward and backward synthesis, to be presented below.  The notion of "focused cognitive process" will be exemplified more thoroughly below, but in essence what is meant is a cognitive process that begins with a small number of items (drawn from memory or perception) as its focus, and has as its goal discovering something about these items, or discovering something about something else in the context of these items or in a way strongly biased by these items.  This is different from, for example, a cognitive process whose goal is more broadly-based and explicitly involves all or a large percentage of the knowledge in an intelligent system's memory store.



Figure 1.  The General Process of Forward Synthesis




The forward and backward synthesis processes as I conceive them, in the general framework of SGS theory, are as follows:

Forward synthesis:

  1. Begin with some initial components (the initial "current pool"), an additional set of components identified as "combinators" (combination operators), and a goal function
  2. Combine the components in the current pool, utilizing the combinators, to form product components in various ways, carrying out reductions as appropriate, and calculating relevant quantities associated with components as needed
  3. Select the product components that seem most promising according to the goal function, and add these to the current pool (or else simply define these as the current pool)
  4. Return to Step 2
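
As a concrete and deliberately simplified illustration, the loop above might be rendered in Python as follows. The specifics are assumptions made for the sketch, not features of any particular AGI system: combinators are taken to be binary functions over components, the goal function scores each component's promise, and a trivial pass-through reduction stands in for whatever reduction rules the system defines.

```python
from itertools import product

def reduce_component(comp):
    """Placeholder reduction; see the component-system sketch in Section 3.1."""
    return comp

def forward_synthesis(initial_pool, combinators, goal, pool_size=20, iterations=5):
    """Greedy forward synthesis: grow the pool, keeping the most promising items."""
    pool = set(initial_pool)
    for _ in range(iterations):
        # Step 2: form product components from the current pool via the combinators.
        products = {reduce_component(c(a, b))
                    for c in combinators
                    for a, b in product(pool, repeat=2)}
        # Step 3: filter by the goal function, retaining a bounded current pool.
        pool = set(sorted(pool | products, key=goal, reverse=True)[:pool_size])
    return pool

# Toy example: components are integers, the sole combinator is addition, and
# the goal function favors proximity to a target value.
target = 100
grown = forward_synthesis({1, 2, 3}, [lambda a, b: a + b],
                          goal=lambda x: -abs(target - x))
print(sorted(grown)[-5:])  # the pool drifts toward sums near the target
```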


Figure 2.  The General Process of Backward Synthesis


Backward synthesis:

  1. Begin with some components (the initial "current pool"), and a goal function
  2. Seek components such that, if one combines them to form product components using the combinators and then performs appropriate reductions, one obtains (as many as possible of) the components in the current pool
  3. Use the newly found constructions of the components in the current pool to update the quantitative properties of the components in the current pool, and also (via the current pool) the quantitative properties of the components in the initial pool
  4. Out of the components found in Step 2, select the ones that seem most promising according to the goal function, and add these to the current pool (or else simply define these as the current pool)
  5. Return to Step 2
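
A similarly simplified sketch of backward synthesis follows. Here the "long-term memory" is just a finite set of candidate components, the search is a brute-force enumeration of small candidate sets, and candidates are scored by how much of the current pool their forward products recover; a realistic implementation would replace the enumeration with heuristic search. All names are assumptions made for illustration.

```python
from itertools import combinations

def backward_synthesis(current_pool, memory, combinators, goal, set_size=2):
    """Find small sets of remembered components whose forward products
    reconstruct as much of the current pool as possible."""
    scored = []
    for candidate in combinations(memory, set_size):
        # Step 2: forward-combine the candidate set (reductions omitted here).
        products = {c(a, b) for c in combinators
                    for a in candidate for b in candidate}
        covered = len(set(current_pool) & products)
        if covered > 0:
            scored.append((covered, goal(candidate), candidate))
    # Step 4: greedy selection of the most promising explanatory sets.
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
    return [candidate for _, _, candidate in scored]

# Toy example: which pairs of remembered numbers "explain" the pool under addition?
explanations = backward_synthesis(
    current_pool={7, 10}, memory=range(1, 10),
    combinators=[lambda a, b: a + b],
    goal=lambda cand: -sum(cand))  # bias toward "simple" (small) constituents
print(explanations[0])             # (2, 5): 2 + 5 = 7 and 5 + 5 = 10
```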

 

Less technically and more conceptually, one may rephrase these process descriptions as follows:

 

Forward synthesis: Iteratively build compounds from the initial component pool using the combinators, greedily seeking compounds that seem likely to achieve the goal

 

Backward synthesis: Iteratively search (the system's long-term memory) for component-sets that combine using the combinators to form the initial component pool (or subsets thereof), greedily seeking component-sets that seem likely to achieve the goal

 

More formally, forward synthesis may be specified as follows.  Let X denote the set of combinators, and let Y0 denote the initial pool of components (the initial focus of the cognitive process).  Given Yi, let Zi denote the set

Reduce(Join(Ci(1), ..., Ci(r)))

where the Ci are drawn from Yi or from X.  We may then say

 

Yi+1 = Filter(Zi)

 

where Filter is a function that selects a subset of its arguments.

Backward synthesis, on the other hand, begins with a set W of components and a set X of combinators, and tries to find a series Yi so that, according to the process of forward synthesis, Yn = W.

In practice, of course, the implementation of a forward synthesis process need not involve the explicit construction of the full set Zi.  Rather, the filtering operation takes place implicitly during the construction of Yi+1.  The result, however, is that one gets some subset of the compounds producible via joining and reduction from the set of components present in Yi plus the combinators X.

Conceptually, one may view forward synthesis as a very generic sort of "growth process," and backward synthesis as a very generic sort of "figuring out how to grow something."  The intuitive idea underlying the present proposal is that these forward-going and backward-going "growth processes" are among the essential foundations of cognitive control, and that a conceptually sound design for cognitive control should explicitly make use of this fact.  To abstract away from the details, what these processes are about is:

 

1.     taking the general dynamic of compound-formation and reduction, as outlined in Kampis (1991) and Chaotic Logic (Goertzel, 1994)

2.     introducing goal-directed pruning ("filtering") into this dynamic, so as to account for the limitations of computational resources that are a necessary part of pragmatic intelligence

 

3.3. The Dynamic of Iterative Forward-Backward Synthesis

While forward and backward synthesis are both very useful on their own, they achieve their greatest power when harnessed together.  It is my hypothesis that the dynamic pattern of alternating forward and backward synthesis has a fundamental role in cognition.  Put simply, forward synthesis creates new mental forms by combining existing ones.  Then, backward synthesis seeks simple explanations for the forms in the mind, including the newly created ones; and this explanation itself then comprises additional new forms in the mind, to be used as fodder for the next round of forward synthesis.  Or, to put it yet more simply:

 

... Combine ... Explain ... Combine ... Explain ... Combine ...

 

It is not hard to express this alternating dynamic more formally, as well.

Let X denote any set of components.

Let F(X) denote a set of components which is the result of forward synthesis on X.

Let B(X) denote a set of components which is the result of backward synthesis of X.  We assume also a heuristic biasing the synthesis process toward simple constructs.

Let S(t) denote a set of components at time t, representing part of a system's knowledge base.

Let I(t) denote components resulting from the external environment at time t.

Then, we may consider a dynamical iteration of the form

S(t+1) = B( F(S(t) + I(t)) )

This expresses the notion of alternating forward and backward synthesis formally, as a dynamical iteration on the space of sets of components.  We may then speak about attractors of this iteration: fixed points, limit cycles and strange attractors.  One of the key hypotheses I wish to put forward here is that some key emergent cognitive structures are strange attractors of this equation.  The iterative dynamic of combination and explanation leads to the emergence of certain complex structures that are, in essence, maintained when one recombines their parts and then seeks to explain the recombinations.  These structures are built in the first place through iterative recombination and explanation, and then survive in the mind because they are conserved by this process.  They then, on an ongoing basis, guide the construction and destruction of various other temporary mental structures that are not so conserved.
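
To make the hypothesis concrete in a toy setting, the iteration can be simulated directly and its trajectory inspected for recurrence. The following sketch assumes forward and backward synthesis routines like those sketched in Section 3.2 (abstracted here as the parameters F and B) and a stub perceive(t) supplying the environmental input I(t); none of this is a claimed implementation, merely a way of rendering the equation executable.

```python
def iterate_mind(S0, perceive, F, B, steps=100):
    """Iterate S(t+1) = B(F(S(t) + I(t))) and record the trajectory."""
    S = frozenset(S0)
    history = [S]
    for t in range(steps):
        S = frozenset(B(F(S | frozenset(perceive(t)))))
        history.append(S)
    return history

def recurrence_report(history):
    """Crude attractor probe: note when the trajectory revisits a prior state.
    Exact revisits indicate a fixed point or limit cycle; a confined but
    non-repeating trajectory is the signature one would examine further for
    a strange attractor."""
    first_seen = {}
    for t, S in enumerate(history):
        if S in first_seen:
            return "state at t=%d first seen at t=%d (cycle length %d)" % (
                t, first_seen[S], t - first_seen[S])
        first_seen[S] = t
    return "no exact recurrence; inspect the trajectory's geometry instead"
```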

4. Self and Focused Attention as Approximate Attractors of the Dynamic of Iterated Forward/Backward Synthesis

In The Hidden Pattern I have argued that two key aspects of intelligence are emergent structures that may be called the "self" and the "attentional focus."[1]  These, it is suggested, are aspects of intelligence that may not effectively be wired into the infrastructure of an intelligent system, though of course the infrastructure may be configured in such a way as to encourage their emergence.  Rather, these aspects, by their nature, are only likely to be effective if they emerge from the cooperative activity of various cognitive processes acting within a broad base of knowledge.

In the previous section I described the pattern of ongoing habitual oscillation between forward and backward synthesis as a kind of "dynamical iteration."  Here I will argue that both self and attentional focus may be viewed as strange attractors of this iteration.  The mode of argument is relatively informal.  References will be given into the cognitive science literature, but the essential processes under consideration are ones that are poorly understood from an empirical perspective, due to the extreme difficulty involved in studying them experimentally.  For understanding self and attentional focus, we are stuck in large part with introspection, which is famously unreliable in some contexts, yet still (I feel) dramatically better than having no information at all.  Anyhow, the perspective on self and attentional focus given here is a synthesis of empirical and introspective notions, drawn largely from the published thinking and research of others, but with a few original twists.

 

4.1. Self

The "self" in the present context refers to the "phenomenal self" (Metzinger, 2004) or "self-model" (Epstein, 1980).  That is, the self is the model that a system builds internally, reflecting the patterns observed in the (external and internal) world that directly pertain to the system itself.  As is well known in everyday human life, self-models need not be completely accurate to be useful; and in the presence of certain psychological factors, a more accurate self-model may not necessarily be advantageous.  But a self-model that is too badly inaccurate will lead to a badly-functioning system that is unable to effectively act toward the achievement of its own goals.

The value of a self-model for any intelligent system carrying out embodied agentive cognition is obvious.  And beyond this, another primary use of the self is as a foundation for metaphors and analogies in various domains.  Patterns recognized as pertaining to the self are analogically extended to other entities.  In some cases this leads to conceptual pathologies, such as the anthropomorphization of trees, rocks and other such objects that one sees in some precivilized cultures.  But in other cases this kind of analogy leads to robust sorts of reasoning – for instance, in reading Lakoff and Nunez's (2002) intriguing explorations of the cognitive foundations of mathematics, it is pretty easy to see that most of the metaphors on which they hypothesize mathematics to be based are grounded in the mind's conceptualization of itself as a spatiotemporally embedded entity, which in turn is predicated on the mind's having a conceptualization of itself (a self) in the first place.

A self-model can in many cases form a self-fulfilling prophecy (to make an obvious double entendre!).  Actions are generated based on one's model of what sorts of actions one can and/or should take; and the results of these actions are then incorporated into one's self-model.  If a self-model proves a generally bad guide to action selection, this may never be discovered, unless said self-model includes the knowledge that semi-random experimentation is often useful.

In what sense, then, may it be said that self is an attractor of iterated forward-backward synthesis?  Backward synthesis infers the self from observations of system behavior.  The system asks: What kind of system might I be, in order to give rise to these behaviors that I observe myself carrying out?  Based on asking itself this question, it constructs a model of itself, i.e. it constructs a self.  Then, this self guides the system's behavior: it builds new logical relationships between its self-model and various other entities, in order to guide its future actions oriented toward achieving its goals.  Based on the behaviors newly induced via this constructive, forward-synthesis activity, the system may then engage in backward synthesis again and ask: What must I be now, in order to have carried out these new actions?  And so on.

My hypothesis is that after repeated iterations of this sort, beginning in infancy and consolidating during early childhood, a kind of self-reinforcing attractor emerges, and we have a self-model that is resilient and doesn't change dramatically when new instances of action- or explanation-generation occur.  This is not strictly a mathematical attractor, though, because over a long period of time the self may well shift significantly.  But, for a mature self, many hundreds of thousands or millions of forward-backward synthesis cycles may occur before the self-model is dramatically modified.  For relatively long periods of time, small changes within the context of the existing self may suffice to allow the system to control itself intelligently.

Finally, it is interesting to speculate regarding how self may differ in future AI systems as opposed to in humans.  The relative stability we see in human selves may not exist in AI systems that can self-improve and change more fundamentally and rapidly than humans can.  There may be a situation in which, as soon as a system has understood itself decently, it radically modifies itself and hence violates its existing self-model.  Thus: intelligence without a long-term stable self.  In this case the "attractor-ish" nature of the self holds only over much shorter time scales than for human minds or human-like minds.  But the alternating process of forward and backward synthesis for self-construction is still critical, even though no reasonably stable self-constituting attractor ever emerges.  The psychology of such intelligent systems will almost surely be beyond human beings' capacity for comprehension and empathy.

4.2. Attentional Focus

Next, the notion of an "attentional focus" is similar to Baars' (1988) notion of a Global Workspace: a collection of mental entities that are, at a given moment, receiving far more than the usual share of an intelligent system's computational resources.  Due to the amount of attention paid to items in the attentional focus, at any given moment these items are in large part driving the cognitive processes going on elsewhere in the mind as well – because the cognitive processes acting on the items in the attentional focus often involve other mental items, not in the attentional focus, as well (and sometimes this results in pulling these other items into the attentional focus).  An intelligent system must constantly shift its attentional focus from one set of entities to another, based on changes in its environment and based on its own shifting discoveries.

In the human mind, there is a self-reinforcing dynamic pertaining to the collection of entities in the attentional focus at any given point in time, resulting from the observation that if A is in the attentional focus, and A and B have often been associated in the past, then the odds are increased that B will soon be in the attentional focus.  This basic observation has been refined tremendously via a large body of cognitive psychology work; and neurologically it follows not only from Hebb's (1949) classic work on neural reinforcement learning, but also from numerous more modern refinements (Sutton and Barto, 1998).  It implies that two items A and B, if both in the attentional focus, can reinforce each other's presence in the attentional focus, hence forming a kind of conspiracy to keep each other in the limelight.  But of course, this kind of dynamic must be counteracted by a pragmatic tendency to remove items from the attentional focus if giving them attention is not providing sufficient utility in terms of the achievement of system goals.
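
The following sketch renders this self-reinforcing dynamic in Python, under purely illustrative assumptions: a numeric activation level per item, an association table, a utility function returning values in [0, 1], and a fixed focus capacity. The parameter names and values are inventions for the sketch, not claims about human attention or any implemented system.

```python
def update_focus(activation, assoc, focus, utility,
                 spread=0.1, decay=0.05, capacity=7):
    """One step of the self-reinforcing focus dynamic described above.
    activation: dict item -> activation level; assoc: dict item -> {item: strength};
    focus: the current in-focus set; utility: callable item -> [0, 1]."""
    for a in focus:
        for b, strength in assoc.get(a, {}).items():
            # If A is in focus and A and B are associated, B's odds of entering
            # the focus soon are increased (Hebbian-style reinforcement).
            activation[b] = activation.get(b, 0.0) + spread * strength
    for item in list(activation):
        # Pragmatic counter-pressure: items whose attention yields little
        # goal-utility gradually lose their claim on the focus.
        activation[item] -= decay * (1.0 - utility(item))
    # The new focus: the most active items, up to a fixed capacity.
    return set(sorted(activation, key=activation.get, reverse=True)[:capacity])
```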

The forward and backward synthesis perspective provides a more systematic perspective on this self-reinforcing dynamic.  Forward synthesis occurs in the attentional focus when two or more items in the focus are combined to form new items, new relationships, new ideas.  This happens continually, as one of the main purposes of the attentional focus is combinational.  On the other hand, backward synthesis then occurs when a combination that has been speculatively formed is then linked in with the remainder of the mind (the "unconscious", the vast body of knowledge that is not in the attentional focus at the given moment in time).  Backward synthesis basically checks to see what support the new combination has within the existing knowledge store of the system.  Thus, forward/backward synthesis basically comes down to "generate and test", where the testing takes the form of attempting to integrate the generated structures with the ideas in the unconscious long-term memory.  One of the most obvious examples of this kind of dynamic is creative thinking (Boden, 1994; Goertzel, 1997), where the attentional focus continually combinationally creates new ideas, which are then tested via checking which ones can be validated in terms of (built up from) existing knowledge.

The backward synthesis stage may result in items being pushed out of the attentional focus, to be replaced by others.  Likewise may the forward synthesis stage: the combinations may overshadow and then replace the things combined.  However, in human minds and functional AI minds, the attentional focus will not be a complete chaos with constant turnover: sometimes the same set of ideas – or a shifting set of ideas within the same overall family of ideas – will remain in focus for a while.  When this occurs, it is because this set or family of ideas forms an approximate attractor for the dynamics of the attentional focus, in particular for the forward/backward synthesis dynamic of speculative combination and integrative explanation.  Often, for instance, a small "core set" of ideas will remain in the attentional focus for a while, but will not exhaust the attentional focus: the rest of the attentional focus will then, at any point in time, be occupied with other ideas related to the ones in the core set.  Often this may mean that, for a while, the whole of the attentional focus will move around quasi-randomly through a "strange attractor" consisting of the set of ideas related to those in the core set.

5. Conclusion

The ideas presented above (the notions of forward and backward synthesis, and the hypothesis of self and attentional focus as attractors of the iterative forward-backward synthesis dynamic) are quite generic, and are hypothetically proposed to be applicable to any cognitive system, natural or artificial.  In another paper (Goertzel, 2006), I get more specific and discuss the manifestation of the above ideas in the context of the Novamente AGI architecture.  I have found that the forward/backward synthesis approach is a valuable tool for conceptualizing Novamente's cognitive dynamics.  And, I conjecture that a similar utility may be found more generally.

Next, so as not to end on too blasé a note, I will also make a stronger hypothesis.  My hypothesis is that, in order for a physical or software system to achieve intelligence that is roughly human-level in both capability and generality, using computational resources on the same order of magnitude as the human brain, this system must

  1. manifest the dynamic of alternating forward and backward synthesis as a significant component of its cognitive activity, and
  2. give rise to a self and an attentional focus as approximate strange attractors of this dynamic.


To prove the truth of a hypothesis of this nature would seem to require mathematics fairly far beyond anything that currently exists.  Nonetheless, I feel it is important to formulate and discuss such hypotheses, so as to point the way for future investigations, both theoretical and pragmatic.

References

 

·     Baars, Bernard J. (1988).  A Cognitive Theory of Consciousness.  New York: Cambridge University Press

·     Boden, Margaret (1994).  The Creative Mind.  Routledge

·     Epstein, Seymour (1980).  The Self-Concept: A Review and the Proposal of an Integrated Theory of Personality, pp. 27-39 in Personality: Basic Issues and Current Research, Englewood Cliffs: Prentice-Hall

·     Goertzel, Ben (2006).  Virtual Easter Egg Hunting: A Thought-Experiment in Embodied Social Learning, Cognitive Process Integration, and the Dynamic Emergence of the Self, Proceedings of 2006 AGI Workshop, Bethesda MD, IOS Press

·     Goertzel, Ben (1993).  The Evolving Mind.  Gordon and Breach

·     Goertzel, Ben (1993).  The Structure of Intelligence.  Springer-Verlag

·     Goertzel, Ben (1994).  Chaotic Logic.  Plenum Press

·     Goertzel, Ben (1997).  From Complexity to Creativity.  Plenum Press

·     Goertzel, Ben (2002).  Creating Internet Intelligence.  Plenum Press

·     Goertzel, Ben and Cassio Pennachin, Editors (2006).  Artificial General Intelligence.  Springer-Verlag

·     Goertzel, Ben and Stephan Vladimir Bugaj (2006).  The Path to Posthumanity.  Academica Press

·     Goertzel, Ben (2006).  The Hidden Pattern: A Patternist Philosophy of Mind.  Brown-Walker Press, to appear

·     Hebb, Donald (1949).  The Organization of Behavior.  Wiley

·     Kampis, George (1991).  Self-Modifying Systems in Biology and Cognitive Science.  Pergamon Press

·     Lakoff, George and Rafael Nunez (2002).  Where Mathematics Comes From.  Basic Books

·     Lenat, D. and R. V. Guha (1990).  Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project.  Addison-Wesley

·     Metzinger, Thomas (2004).  Being No One.  MIT Press

·     Ramamurthy, U., Baars, B., D'Mello, S. K. and Franklin, S. (2006).  LIDA: A Working Model of Cognition.  Proceedings of the 7th International Conference on Cognitive Modeling, eds. Danilo Fum, Fabio Del Missier and Andrea Stocco, pp. 244-249.  Edizioni Goliardiche, Trieste, Italy

·     Sutton, Richard and Andrew Barto (1998).  Reinforcement Learning.  MIT Press

·     Wang, Pei (2006).  Rigid Flexibility: The Logic of Intelligence.  Springer-Verlag

 



[1] The term "attentional focus" is not used in (Goertzel, 1997), but the concept is there under the name "perceptual-cognitive loop."