
The Dynamics of Consciousness in Webmind

Ben Goertzel

 

Synopsis.  This article reviews the concept of consciousness, and its relevance to the engineering of the Webmind AI system.  It briefly describes the general structures and dynamics according to which Webmind’s consciousness operates.

 

1. Introduction

“Consciousness” is a very general term, and the issue of whether computers can be conscious or not is a big one.  This question will be briefly discussed here, but will not be our focus.  Rather, we’ll deal with somewhat more practical issues.  Focused consciousness – short-term memory – attention – call it what you will.  How can it be made to work, within an AI system like Webmind, on a general level?


Half of the article is a general review of consciousness, short-term memory, attention and so forth.  I review two approaches to understanding consciousness: the Global Workspace theory, and my own theory of consciousness as a coherentizing Perceptual-Cognitive-Active Loop.  These are not presented as complete theories of the domain in question, but rather as heuristic guides to thinking about the issues involved.  The next part of the document discusses the embodiment of these processes in Webmind.  Specific Webmind processes useful for consciousness are described, and then it’s explained how these fit in with the two theoretical approaches to consciousness.

 

For general background on the Webmind AI system, see the technical white papers on the Webmind Inc. website at http://www.webmind.com/productscore.html.  The first sections of this document are comprehensible without any Webmind background, but to understand the section describing the implementation of consciousness in Webmind, you will have to first read the basic Webmind docs on the Webmind Inc. site; otherwise you will encounter terms like “nodes”, “links” and “activation” without knowing precisely what they mean.

 

Before proceeding on to interesting things, a brief terminological note may be useful.  I’ll speak a great deal in this chapter about something called the AttentionalFocus.  This is closely related to what has been called the STM or Short Term Memory in much psychological literature.  However, the term STM seems to overemphasize the recent-perceptual-history aspect.  Other terms considered were:

·         WorkingMemory, which seems to imply that the rest of the memory isn’t working, and also implies adherence to a specific theory of working memory to which I don’t entirely subscribe

·         GlobalWorkspace, which implies adherence to a particular theory of consciousness, which I like a lot, but isn’t quite identical with the one implicit in Webmind

·         Consciousness, which is OK, but doesn’t guide the mind in any useful concrete directions, and also carries the implication that only things in the AttentionalFocus are conscious at all, whereas I believe everything is conscious to a certain degree.

·         Context, which Cassio suggested, but which seems not quite right: in the EIL design, we’re already using the word “context” to refer to a category of particular situations, not to the focus of the mind at a given time. 

So for this essay it’s AttentionalFocus, or AF.  We’ll see if it sticks.

 

When speaking generally rather than implementationally, in the following discussion I’ll often use the term “consciousness” to mean about the same thing as AttentionalFocus.  Don’t make too much of this, philosophically.  Actually, I think that everything in the mind is conscious to some degree; and everything in the physical world is conscious to a yet lesser degree (as Peirce said, “Matter is but Mind hide-bound with habit.”).  Everything is conscious; but some things are more conscious than others!  But it’s an acceptable shorthand to use the word “consciousness” in lieu of “extra-intensely conscious stuff.”

 

 

2.  Can Computers Be Conscious?

 

It’s perhaps the thorniest philosophy-of-AI question of all: Can a computer program ever be conscious? 

 

This question lies at the heart of the philosophy of AI, but there is nothing near a consensus among researchers.  Some believe that consciousness is just another deterministic process like reasoning or perception; others believe that it’s a mysterious phenomenon unaccounted for by any kind of science; others that it’s a quantum phenomenon unique to the brain as a macroscopic quantum system; others that it’s just a folk-psychology concept with no real fundamental meaning; and so on.

 

One interesting thing about the question of computer consciousness is that, judging from current practice, it is pretty much completely irrelevant to the practice of AI design and engineering.  It’s interesting to ask: why is this?  Of course, different philosophies of consciousness lead to different answers!  Those who believe that consciousness comes out of divine intervention in the human brain, or out of macroscopic quantum processes that are present in the human brain but not in computer programs, would say that the reason consciousness is irrelevant to AI engineering is that the programs engineered cannot be conscious.  On the other hand, those who believe that consciousness is basically a fiction, that it’s just a construct we deterministic human machines use to describe our deterministic actions, would say that consciousness is irrelevant to AI engineering because consciousness is nothing special: it’s just one more part of mind programming, to be dealt with like the other parts.  And so forth.

 

I believe that consciousness, properly understood, does need to be considered in the course of AI design and engineering.  I think that the reason consciousness has been irrelevant to practical AI so far is simply that practical AI has not been concerned with actually building thinking machines, but only with making programs that manifest particular components of intelligence in isolation. 

I think it is crucial to analyze consciousness as a dual phenomenon.  What we call consciousness has two aspects:

Structured consciousness: There are certain structures associated with consciousness, which are deterministic, cognitive structures.  These structures, as they manifest themselves in Webmind and in the human brain, will be discussed in the other sections of this document.

Raw consciousness:  The “raw feel” of consciousness, which I will discuss briefly here.

What is often called the “hard problem” of consciousness is how to connect the two.  Although few others may agree with me on this point, I believe I know how to do this.  I analyze raw consciousness as “pure, unstructured experience,” as Peircean First, which manifests itself in the realm of Third as randomness.  Structured consciousness, on the other hand, is a process that coherentizes mental entities, makes them more “rigidly bounded,” less likely to diffuse into the rest of the mind.  The two interact in this way: structured consciousness is the process in which the randomness of raw consciousness has the biggest effect.  Structured consciousness amplifies little bits of raw consciousness, such as are present in everything, into a major causal force.

Obviously, my solution to the “hard problem” is by no means universally accepted in the cognitive science or philosophy or AI communities!  There is no widely accepted view; the study of consciousness is a chaos.  I anticipate that many readers will accept my theory of structured consciousness but reject my theory of raw consciousness.  This is fine: everyone is welcome to their own errors!  The two are separable, although a complete understanding of mind must include both aspects of consciousness.

 

Raw consciousness is a tricky thing to deal with because it is really outside the realm of science.  Whether my design for structured consciousness is useful – this can be tested empirically.  Whether my theory of raw consciousness is correct cannot.  Ultimately, the test of whether Webmind is conscious is a subjective test.  If it’s smart enough and interacts with humans in a rich enough way, then humans will believe it’s conscious, and will accommodate this belief within their own theories of consciousness.

2.1 Raw Consciousness

Raw consciousness is really the crux of it all.  Consciousness is not just focus – it’s also raw, primal experience, the sense of being and becoming.  It’s hard to put this aspect of consciousness in words, but you’re a conscious organism, so you know what I mean.

 

Of course, this is the most controversial aspect of consciousness, and I want to leave room for the reader to agree with my analysis of structural consciousness but not with my analysis of the nature of conscious experience.  However, I would feel it dishonest to present an overall design for a thinking machine without saying how I think conscious experience fits into the picture.

My theory of raw consciousness is very simple, and is well captured in two axioms:

1) raw consciousness is absolute freedom, pure spontaneity and lawlessness

2) pure spontaneity, when considered in terms of its effects on structured systems, manifests itself as randomness.

Basically, this is a more modern way of stating that raw consciousness is Peircean First, and that First, when it appears in the world of Third, takes on the guise of chance.

In the past I have referred to this theory of consciousness as the “randomness theory,” so I will continue to use this name here.  However, this phrase should not be over-interpreted: to declare that “raw consciousness is randomness” would be an overstatement, albeit a suggestive one.  Raw consciousness is consciousness only; it is too elemental to be thoroughly expressed in terms of anything else.  The correct statement is that, when one seeks to study raw consciousness by the scientific method, i.e. by detecting patterns in systems, one finds that raw consciousness is associated with randomness.

Randomness is the incarnation which raw consciousness assumes in the realm of regularities.

 

The randomness theory is not entirely new.  However, it has been hinted at much more often than it has been openly stated.  For instance, Roger Penrose’s anti-AI tract The Emperor’s New Mind argues that consciousness and creativity cannot be modeled with computers, but only with uncomputable (i.e. random) numbers.  And the consciousness-based approach to quantum measurement, as developed e.g. by Eugene Wigner and Amit Goswami, implies that the only role of consciousness is to make a random choice.  But both of these trains of thought stop short of explicitly associating consciousness with randomness.  My specific take on this approach is presented in the article “Chance and Consciousness”, available at http://goertzel.org/dynapsyc/1995/GOERTZEL.html.

 

In artificial intelligence, the randomness theory provides a missing link: the link between computation and creativity.  From its inception, AI has been pursued by the nagging question:  “If we build a machine that 'thinks,' where does the awareness come in?  How can it be truly creative if it is just following rules, if there is no free will, no reflection, no consciousness of what it is doing?”  The randomness theory declares that the consciousness of a machine may be associated with the randomness in its dynamics. Thus, machines made of silicon and steel are in principle just as capable of consciousness as we machines made of biomolecules.

 

Less germanely in the present context, the randomness theory also provides a natural explanation for the mysterious appearance of consciousness in the quantum theory of measurement.  The problem with the quantum theory of consciousness, as usually formulated, is its lack of any connection with the biology or phenomenology of consciousness.  The randomness theory fills in the missing link: it is randomness which collapses the quantum wave-packet, and this same randomness which drives the neural processes underlying the everyday consciousness of objects.  This aspect of the theory is addressed in the “Chance and Consciousness” article referenced above as well.

 

On the face of it, the very ubiquity of randomness might seem to pose a problem for the randomness theory of consciousness.  If every particle has some randomness in it, then everything is conscious, and what can be said about the special properties of human consciousness?  What this objection ignores, however, is the distinction between “consciousness in itself” and “consciousness of something.”  I will argue that the sustained consciousness of an object which we humans experience can only be attained by very special mechanisms.  These mechanisms are what I call the structural consciousness of humans.

In this view, altered states of consciousness, such as meditation and creative inspiration, can be seen as resulting from subverting the circuitry evolved for object-consciousness, and using it for other purposes.  Furthermore, ordinary linguistic thought can be seen as a less extreme version of the same phenomenon.  This leads to a reinterpretation of Dennett’s hypothesis that consciousness is a meme (a socially transmitted, self-perpetuating behavior/thought pattern): it is not raw consciousness itself which is memetic, but rather the refocusing, on abstract patterns, of circuitry intended for simple object-consciousness.  This is the subtlety of structural consciousness, which should not be confused with the absolute simplicity and unsubtlety of raw consciousness.

 

An example of the kind of “special mechanism” required for object-consciousness is the “perceptual/cognitive/active loop” (PCAL) to be described below, which gives one method by which a biological or digital intelligence can amplify simple randomness into sustained randomness regarding some specific focus of attention.  The PCAL ties in with many current ideas in neuroscience, most notably Edelman’s Neural Darwinism and Taylor’s neural network theory of mind.  One can also extend this analysis to deal with the detailed comparison of various states of consciousness, for example the relation between ordinary waking consciousness and enlightened consciousness as described in various mystical traditions, but in the present context, this would take us rather too far afield!

The randomness theory of consciousness may seem to be a mere sophistry, an attempt to probe deep waters with shallow-water instruments.  In short, it may appear “too simple to be true.”  But my conviction is that raw consciousness really is simple.  It is we scientists and philosophers who have made it seem complex, by confusing it with those phenomena that it tends to be associated with, i.e. with structural consciousness.  The randomness theory allows raw consciousness its elemental simplicity, and places the complexity where it belongs.  This clarification is necessary as a precursor for the orderly design and engineering of structural consciousness, which is required as part of the Webmind design.

 

 

 

3. Global Workspace Theory

Among the various psychological theories of STM, working memory, consciousness, etc., one that appeals to me to a fair degree is Global Workspace theory.  This theory was developed by Bernard Baars (see In the Theater of Consciousness, 1997; A Cognitive Theory of Consciousness, 1988), and is presented in a very partial way on the Web in a couple of papers at http://www.phil.vt.edu/ASSC/esem2.html (see both “Metaphors of Consciousness and Attention in the Brain” and “A Thoroughly Empirical Approach to Consciousness: Contrastive Analysis”).

An interesting AI system that explicitly embodies Global Workspace theory is described in the paper “‘Consciousness’ and Conceptual Learning in a Socially Situated Agent” by Myles Bogner, Uma Ramamurthy, and Stan Franklin: http://www.msci.memphis.edu/~cmattie/Consciousness_and_Conceptual_Learning_in_a_Socially_Situated_Agent/Consciousness_and_Conceptual_Learning_In_A_Socially_Situated_Agent.html.

This, of course, is not a Webmind-like system; it’s much more limited.  But it is a serious attempt to build a simple mind, and in this sense it is better than nearly all work in the AI field.

Global workspace theory has many flaws.  It’s not as original as it sometimes pretends to be: nearly all of the ideas it presents were present in previous theoretical approaches, under slightly different names.  Also, it gets vague just where you’d like it to get precise.  Nonetheless, I’ve found it to be a valuable conceptual guide to an area of AI engineering that otherwise can be incredibly confusing.  Quite likely, it is more valuable as a guide to AI design than as a guide to cognitive psychology research.

 

The global workspace theory views the mind as consisting of a large population of small, specialized processes – a society of agents.  These agents organize themselves into coalitions, and coalitions that are relevant to contextually novel phenomena, or contextually important goals, are pulled into the global workspace (which is identified with consciousness).  This workspace broadcasts the message of the coalition to all the unconscious agents, and recruits other agents into consciousness.   Goal contexts, perceptual contexts, conceptual contexts and cultural contexts play a role in determining which coalitions are relevant – these form the unconscious “background” of the conscious global workspace.   New perceptions are often, but not necessarily, pushed into the workspace.   Some of the agents in the global workspace are concerned with action selection, i.e. with controlling and passing parameters to a population of possible actions.   The contents of the workspace at any given time have a certain cohesiveness and interdependency, the so-called “unity of consciousness.”
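As a rough illustration of these dynamics, here is a minimal sketch in Python.  All the names and numbers in it are hypothetical stand-ins of mine, not Webmind structures or Baars’ own formalism; it just shows the coalition-selection-and-broadcast cycle the theory describes.

    # Minimal sketch of one global-workspace cycle (illustrative only).
    def workspace_cycle(agents, coalitions):
        # The coalition most relevant to current contexts is pulled
        # into the global workspace (identified with consciousness).
        winner = max(coalitions, key=lambda c: sum(a["relevance"] for a in c))
        # The workspace broadcasts the winning coalition's message to
        # all unconscious agents, recruiting further relevant agents.
        for agent in agents:
            if agent not in winner and agent["topic"] == winner[0]["topic"]:
                agent["relevance"] += 0.5    # recruited toward the workspace
        return winner

Run over many cycles, with relevances shifting as goals and perceptions change, this yields a serial stream of “conscious” coalitions riding on top of a parallel population of unconscious agents – which is the picture the theory paints.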

3.1 Historical Precedents and Philosophical Debates

All this is not that different from the classical cog-sci theories of consciousness put forth by Newell, Simon, Anderson, and so forth.  It differs mainly in its view of the mind as an agent system, and the ensuing view of the coupled LTM/global-workspace system as a complex dynamical system.   From my point of view, it’s a nice framework, but doesn’t tell you anywhere near enough to build a useful global workspace.  The crux of consciousness is in the actions of the particular agents in the workspace.

 

Many cognitive scientists who like distributed processing and self-organization do not like this kind of approach to consciousness.  Daniel Dennett, for example, ridicules the “Cartesian Theater,” equating it with the little homunculus in the head who carries out conscious processing.  But I like both distributed processing and self-organization, and the global workspace.

 

There are two issues here:

·         localization of consciousness

·         the nature of the processes of consciousness.  

It seems clear from neuroanatomical data that there is some localization in human attentional focus, but that it’s also distributed over large regions of the brain – just not all of the brain.  Attentional focus and the unfocused unconscious are distinct but interwoven.  As we’ll see in the final section, this is quite consistent with Webmind’s implementation of AttentionalFocus.   

As for processes of consciousness, I’ll argue in the final section that the global workspace doesn’t have to be a homunculus carrying out processes totally different from those going on in the rest of the mind.   Rather, the global workspace involves pretty much the same processes as the rest of the mind, but mixed in different proportions, and thus achieving different effects. 

 

In short, the idea of a global workspace, a separate part of the mind/brain devoted to attentional focus or consciousness, has been abused by theorists who’ve claimed that

·         this is a totally separate entity from the rest of the mind

·         this part of the mind carries out special rational, logical processes, different from what goes on in the rest of the mind

Newell and Simon’s global workspace, for example, came along with a lot of overly restrictive, rule- and logic-based ideas about how the global workspace and the rest of the mind function.


But, one shouldn’t confuse the valid idea of a global workspace with the incorrect hypotheses that some theorists have made about how the global workspace works; or with the incorrect assumptions that some theorists have made about whether the global workspace is strictly localized or partially distributed.  Don’t throw the baby out with the bathwater.

3.2 Taxonomy of Conscious Processes

What kind of work goes on in the global workspace?  This is something that cognitive psychology can tell us a lot about.  The following discussion is based largely on Baars’ paper “A Thoroughly Empirical Approach to Consciousness.”  Following Baars, I’ll review several kinds of work that happen in the human mind’s conscious focus:

 

·         Perception

·         Imagery and inner speech

·         Input selection

·         Learning

·         Memory

·         Spontaneous problem solving

This is a kind of “catalogue of conscious functions.”   I’ll go through it in moderate detail, and then indicate what I think it leaves out.

Perception

In the global workspace, inputs to the system are given representations.

 

What is a representation?  Not some logical formula explicitly written on some internal mental whiteboard, nor a direct image of what’s been perceived.  Rather, a representation of X is anything that allows you to efficiently evaluate f(X) for many interesting functions f.  It’s a compact encoding of many properties of X.
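In code terms, the idea looks something like the following toy sketch (the features and the word are my own invented example, not a Webmind data structure):

    # A representation of X: a compact structure from which many
    # interesting functions f(X) can be evaluated cheaply.
    def represent(word):
        return {
            "length": len(word),
            "first": word[0],
            "vowels": sum(ch in "aeiou" for ch in word),
        }

    rep = represent("refrigerator")
    is_long = rep["length"] > 8        # one f of many
    starts_r = rep["first"] == "r"     # another f, same compact rep

The point is that the dict is neither a logical description of the word nor a copy of it; it is whatever bundle of precomputed properties makes downstream questions cheap to answer.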

 

Not all perceptions enter consciousness.  The hum of a refrigerator, for example, is quickly passed into the realm of the not-heard, after a few minutes of exposure.  This means that a judgment of perceptual novelty is made, by the perceptual part of the mind, before a percept is passed into consciousness.  Only things that pass the perceptual novelty test get to go into consciousness and get full perceptual representations.  Things that fail the perceptual novelty test may still have enough importance to get remembered, but they won’t get processed as thoroughly.  For example, stories that are heard unconsciously are entered into the long-term memory but aren’t parsed and semantically processed very well.

 

Also, perceptions are constrained based on contextual expectations.  In other words, contextual expectations act not only on consciousness, but also at the lower perceptual level, modifying perceptions before they get into the fluidly manipulable domain of the global workspace.

Imagery and Inner Speech

 

In humans, consciousness often comes along with

·         Perceptual images retrieved from memory

·         Inner speech, i.e. language produced internally but not uttered

·         Items in “working memory”, being actively remembered for use by current conscious processes

·         Non-perceptual items retrieved from LTM may be cast in the form of quasi-perceptual images, thus allowing the mind to leverage its powerful percept analysis system to help it analyze concepts as well.

Input Selection

Human consciousness deals with

·         Perception of streams of stimuli

·         Events, outside the scope of attention, interrupting the input stream

·         Voluntary direction of attention

·         Attention drawn to new or changing stimuli

The voluntary direction of attention is a shift of attention from one stream of stimuli to another that involves the self, i.e. involves explicit knowledge of the fact that a shift of attention is being undertaken by the system, and the recording of this knowledge, perhaps temporarily.

Learning

Consciousness deals with some kinds of learning:

·         Learning novel actions (learning how to ride a bike)

·         Learning de-automatized actions (what to do when the bike breaks down)

·         Learning stimuli and relations among stimuli (declarative rather than procedural learning)

Memory

We have in consciousness several kinds of memories, emerging into consciousness from LTM:

·         Recalled autobiographic (episodic) memories

·         Recalled declarative memories (memories of relations)

·         Recalled sensory memories (memories of perceived objects)

·         Recalled active memories (memories of specific actions undertaken)

Spontaneous Problem Solving

In the process of problem-solving, there’s a typical process whereby the initial stage of problem formulation is conscious, and the final stage of problem solution is conscious, but the long intermediate stage of problem resolution is unconscious.

 

That is Baars’ catalogue of conscious functions.  He compares each of these to similar unconscious functions, and says a bit about the psychological and neural differences between conscious functions and similar unconscious functions.

 

There are some peculiar omissions here: for instance, reasoning is not emphasized, though it is really a key aspect of human consciousness.  We reason much better, much more rigorously, on the conscious level than on the unconscious level.  I think this is because, on the conscious level, we can be much more careful about what evidence we draw into the reasoning process.  We don’t have to take into account everything we know and feel, which fuzzifies our conclusions.  Instead, we pare down all our concepts to a bare minimum, so they’ll fit into a small space, and with these pared-down concepts a single step of reasoning can be done with a greater certitude, which means that longer chains of reasoning can be meaningfully constructed.
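A toy calculation (the numbers are invented, purely to illustrate the last point): if each inference step carries confidence c, a chain of n steps carries roughly c to the power n, so a small gain in per-step certitude buys a disproportionately longer usable chain.

    # Confidence of a reasoning chain decays geometrically with length.
    def chain_confidence(step_conf, n_steps):
        return step_conf ** n_steps

    print(chain_confidence(0.90, 10))   # ~0.35: fuzzy concepts, short chains
    print(chain_confidence(0.99, 10))   # ~0.90: pared-down concepts, long chains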

4. Consciousness as Coherentization

Global Workspace Theory is nice.  But it doesn’t explain everything about consciousness – not by a long shot.  (Forget about the subjective nature of experience – I’m just talking about structure and dynamics.)  It seems to leap from a general schematic view of consciousness and its role to a taxonomy of conscious processes, without giving more than lip service to the question: What is the essence of the conscious dynamic?  What is it that consciousness does, in essence?  Does it really just do a grab bag of things, falling into a taxonomy?  Or is there something simpler at the core?


In From Complexity to Creativity, I outline a somewhat different approach to consciousness, also based on a combination of psychological and neurological arguments.  My theory of consciousness there is very simple:

What consciousness does is to create coherent wholes.

This point of view is presented online in Section 8.2 of http://goertzel.org/ben/c2c7.html (the final sections of that chapter contain some ideas about consciousness and division algebras that I’m not sure I fully agree with anymore, by the way).

 

Like the Global Workspace theory, this theory of consciousness also has its flaws.  It is a good conceptual guide, but it’s hard to make it precise in empirically useful ways.  It is presented here as a heuristic guide to thinking about Webmind AI, rather than as a purported solid scientific theory of consciousness.

 

The basic idea of “consciousness as coherentization” is this.  Disparate percepts come in from the sense organs, and consciousness groups them into whole scenes, situations, ideas.  Fragmented and vague memories and notions and intuitions bubble up from the LTM, and consciousness makes them crisp and solid and well-defined.  Then they can go back into the LTM and diffuse again slowly, and structure the diffusive intelligence processes of the LTM while they do so.

 

Another way to say this is: Consciousness draws boundaries.  It takes a fuzzy field of information and draws a boundary around part of it and says: This is a whole.  Consciousness draws boundaries by being bounded.  Whatever items are bound together in a moment of consciousness are bound together in the memory for a long time.

 

The idea of consciousness as a coherent coherentizer is present in the global workspace theory via the concept of the “unity of consciousness.”  But what does this unity consist of?  The fact that consciousness contains a coalition of agents – a coalition being a collection of agents that rely on each other.  The detailed logic of coalition-ness needs more attention than cognitive science gives it.  In systems theory terms, coherence has a lot to do with autopoiesis, the self-producing nature of systems.  Each element of a coalition in the mind plays a necessary role, so that if it’s removed, something similar to it will be put in its place by the other members of the coalition.  Consciousness then takes a system of elements and removes, inserts and modifies until it finds something with a natural coherence.  It takes coalitions and makes them tighter.

 

Of course, it’s not hard to see how the taxonomy of conscious processes described above can be understood as a coherentization process.

·         Perception: in consciousness, meaningless percepts are made into meaningful wholes, via the unification of perceptual forms with conceptual forms drawn from LTM into new wholes

·         Imagery and inner speech: This is a matter of giving a concrete and coherent form to vague shifting ideas from the unconscious

·         Input selection: A fuzzy attentional field in the unconscious, getting bits here and there from many different streams of external input, is replaced by a coherent focus on a single stream of data.  A boundary is drawn between what’s in that stream and what isn’t.

·         Learning: This is the recognition of patterns (in the outside world, the inner world, or emergent between the two).  A particular pattern is noticed and held in consciousness which records it in the memory for future reference.

·         Memory:  Memories live in the unconscious in unclear and shifting form; consciousness pulls them out and crystallizes them into their essences, then puts them back, with a more limited richness but a greater clarity

·         Spontaneous problem solving:  The crystallization of an answer is a complex and interesting aspect of coherentization – vague intuitions about how something should work suddenly come to stand in a clear relation to the problem posed, with a path from problem to solution; and this sudden clarity usually occurs in consciousness.  One senses consciously that the relationships needed to solve the problem have been found, but they’re still a little muddled and mixed up with other things; by grabbing them into consciousness the mind extracts the needed truth from the morass, making a coherent whole solution rather than a solution that blends into all the other relations uncovered along the way.

4.1 The Perceptual-Cognitive-Active Loop

I also propose, in Complexity to Creativity (section 8.3 of the above-referenced file), that consciousness is a kind of cyclic process, which loops information from perception to cognition, to perception, to cognition, and so forth (in the process continually creating new information to be cycled around).  The cognitive end of the loop, I suggest, serves largely as a tester and controller. The perceptual end does some primitive grouping procedures, and then passes its results along to the cognitive end, asking for approval: “Did I group too little, or enough?”  The cognitive end seeks to integrate the results of the perceptual end with its knowledge and memory, and on this basis gives an answer. In short, it acts on the percepts, by trying to do things with them, by trying to use them to interface with memory and motor systems.

 

The cognitive end of the loop gives the answer “too little coherentization” if the proposed grouping is simply torn apart by contact with memory – if different parts of the supposedly coherent percept connect with totally different remembered percepts, whereas the whole connects significantly with nothing. And when the perceptual end receives the answer “too little,” it goes ahead and tries to group things together even more, to make things even more coherent. Then it presents its work to the cognitive end again. Eventually the cognitive end of the loop answers: “Enough!” Then one has an entity which is sufficiently coherent to withstand the onslaughts of memory. 

 

Of course, the cognitive end may also assist in the coherentizing process. Perhaps it proposes ideas for interpretations of the whole, which the perceptual end then approves or disapproves based on its access to more primitive features. This function is not in any way contradictory to the idea of the cognitive end as a tester and controller; indeed the two directions of control fit in quite nicely together.


Note that a maximally coherent percept/concept is not desirable, because thought, perception and memory require that ideas possess some degree of flexibility. The individual features of a percept should be detectable to some degree, otherwise how could the percept be related to other similar ones? The trick is to stop the coherence-making process just in time.
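A minimal runnable rendering of this stop-just-in-time logic, in Python – everything here (the toy numeric “percepts,” the spread-based coherence test) is an invented illustration of the loop’s control structure, not Webmind code:

    # Sketch of the perceptual-cognitive-active loop (PCAL).
    # The perceptual end keeps grouping a little more; the cognitive
    # end tests each proposed grouping and finally answers "Enough!"
    # -- just before the whole would lose its internal flexibility.
    def pcal(percepts, coherence_threshold=2.0):
        group = [percepts[0]]
        rest = sorted(percepts[1:], key=lambda p: abs(p - percepts[0]))
        while rest:
            group.append(rest.pop(0))              # group a bit more
            if max(group) - min(group) > coherence_threshold:
                group.pop()                        # torn apart: stop here
                break
        return group

    print(pcal([1.0, 1.5, 2.2, 9.0, 1.1]))   # -> [1.0, 1.1, 1.5, 2.2]

Here the tear-apart test plays the role of memory contact: the grouping grows until adding anything more would destroy its coherence, and stops one step short of that.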


This Perceptual-Cognitive-Active Loop theory seems to be well supported by what’s known about the neuroanatomy of consciousness.  How does it fit in with the Global Workspace theory?  Not badly, actually.  The two approaches are orthogonal.  The Global Workspace theory is all about what gets into consciousness, and what types of functions consciousness carries out.  It doesn’t try to get at the essence of what consciousness does, the core logic of consciousness.  It’s quite compatible with the notion that the key dynamic of consciousness is a PCA looping action.  As noted above, most of the functions of consciousness as described in the Global Workspace theory can be seen in terms of coherentization, at least in part; and the PCA loop is a way of doing coherentization.

5. Conscious Operations in Webmind

Now let’s get down to business.  How does Webmind implement its global workspace, its Perceptual-Cognitive-Active loop, its AttentionalFocus, its spotlight of focused consciousness?  Remember, in order to read this section with understanding, you’ll need to first review the Webmind docs at http://www.webmind.com/productscore.html.  

 

The distributed/localized question has already been resolved in the Webmind design.  We have the hypothesis that the AF can be a NodeGroup or a collection of NodeGroups.  This solves the “distributed consciousness” problem quite nicely.  Attentional focus is localized to these nodegroups, but these nodegroups may be distributed across the lobes of the psynet.

 

The key question about AF that remains is, then: What happens inside it?  What are the conscious processes of Webmind?  In the overall Webmind design, we already have a well-worked-out theory of what goes on in the LTM, i.e. the rest of the psynet.  So, in terms of psynet processes, we may cast our question as follows: Which of the following three options holds?

1.       There are AF operations that don’t occur in LTM

2.       Items in AF do familiar LTM operations, but with a different balance between them

3.       Items in AF do the same operations as LTM items, and with the same balance – they just spend more attention on each operation

 

I believe that 2 is mostly the case, and what I’m going to do here is describe the “different balance” that this implies.

 

I also think that 1 is partly true, but only on a very technical level.  The operations done in AF are of the same character as the ones done in LTM, but have different requirements, and this leads to some technical differences in some AF operations.  For example, in AF we need fast asynchronous halo formation, and we need wanderers with restricted wandering scope.  These are minor modifications of LTM processes.

 

Without further ado, then, here’s what I think happens in the AttentionalFocus.  I’ll start by describing these processes in psynet terms.  Then, in the following section, I’ll explain why these psynet processes fulfill the requirements for consciousness as outlined in both the Global Workspace theory and my own theory of consciousness.  All this is not done in the most rigorous possible way, because neither the global workspace theory nor my own theory of consciousness has been defined in a truly rigorous way.  The point is thus not to prove that the proposed approach to consciousness is correct, but rather to make a strong argument for plausibility.  Since we are building a real AI system, the proof will be in the system’s behavior when it’s complete.

 

Rapid association formation.   We need results fast, not waiting for slow parts of the mind.  This means that associations formed in the AttentionalFocus may reflect the current bias of the mind, rather than reflecting everything in the mind regardless of its importance.

 

New perceptual items come into the AF and start off their lives by spurting a lot of activation.  Maybe this happens as follows: Each new item, for the first moments it’s in the system, gets to send activation out.  This will boost its importance automatically, and will cause things related to it to become important as well.
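A sketch of the kind of update this suggests, with invented parameter values and dict-based “nodes” standing in for the real node objects:

    # New AF items spurt activation for their first few cycles,
    # boosting their own importance and that of their neighbors.
    def spurt_step(nodes, links, now, spurt_cycles=3, spurt=1.0, spread=0.1):
        for n in nodes.values():
            if now - n["entered"] < spurt_cycles:
                n["act"] += spurt                  # the newcomer's spurt
        for src, dst, w in links:                  # ordinary spreading
            nodes[dst]["act"] += spread * w * nodes[src]["act"]

    nodes = {"cat": {"act": 0.0, "entered": 5}, "pet": {"act": 0.2, "entered": 0}}
    links = [("cat", "pet", 0.8)]
    spurt_step(nodes, links, now=5)   # "cat" just arrived: it spurts,
                                      # and "pet" is pulled up with it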

 

Inference is done on nodes in AF, and on logical and other combinations of nodes in the AF.   This means two things:

·         fusion of AF nodes by logical and other operators is frequent

·         these fused nodes then do a lot of inference, just like the other nodes in AF

Links belonging to nodes in AF are extracted and turned into ANDNodes and ORNodes with high frequency.

 

Links between various things in AF are actively formed.  For this, one wants wanderers whose scope is restricted to the AF.  Of course, nodes in the AF also send out normal wanderers, that wander throughout the LTM, building links to other things that will then possibly be drawn into AF, and gathering new activation.
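The restricted-scope wanderer might look roughly like this (a hypothetical sketch; the real wanderer machinery is described in the Webmind docs):

    import random

    # A wanderer whose walk is confined to the AF node set: it hops
    # along links but never leaves the AF, then proposes a new
    # associative link between its origin and wherever it stopped.
    def af_wander(start, neighbors, af_nodes, hops=5):
        current = start
        for _ in range(hops):
            options = [n for n in neighbors.get(current, []) if n in af_nodes]
            if not options:
                break
            current = random.choice(options)
        return (start, current)   # candidate link, both ends inside AF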

 

Categories of nodes in AF are formed on an experimental basis, and then deleted rapidly if they don’t form interesting links.  The threshold for formation of categories is much lower here than in LTM: two things may provisionally be grouped together even if they only have one association in common, for example.

 

The combination of categorization with other conscious processes creates a lot of interesting emergent phenomena.  For example, suppose one is focusing on the relationship between A and B.  One may study this relationship from many different perspectives, in the following way: Create a category C, containing A and B.  Then, OR C with different nodes, D, E, F, etc.  One is thus creating a set of different “perspectives” on the relation between A and B.  This is a very common combination of operations in my own mind: I hold a collection of items there and sort of “turn them around” in various ways, looking at them in different views, until I find a view of the relationship between the items that is useful.
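Schematically (an invented toy, with strings standing in for nodes):

    # The "perspectives" trick: group A and B into a category C, then
    # combine C with a series of other nodes, giving several views on
    # the A-B relationship.  Views that build useful links survive;
    # the rest are deleted rapidly.
    def perspectives(a, b, viewpoints):
        c = frozenset([a, b])                  # the category C = {A, B}
        return [(c, v) for v in viewpoints]    # C combined with D, E, F, ...

    views = perspectives("dolphin", "sonar", ["navigation", "biology", "play"])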

 

Coherent event-entity formation is done extensively in AF, as well as in the Perception NodeGroup.  This process will be described in another document: it groups nodes into wholes based on a combination of spatiotemporal contiguity and property-based clustering.

 

Goals cause schema to be learned through reasoning and evolution.  Specifically, goals that are very important and hence in AF cause schema learning processes to start.  When the schema are learned successfully they become important and pop up into AF themselves.

 

Perceptive, active and cognitive schema are executed and adapted.

 

Inhibitory activation spreading prevails.  This seems crucial to coherentization and the unity of consciousness.  For consciousness to play the role of taking coalitions of nodes and making them crisper, more set apart from the rest of the net, it must actively draw a boundary around the given coalition, by squelching links between elements in the coalition and elements out of the coalition.  Of course, a certain amount of coherentization is done without this, just by virtue of the fact that nodes in AF are building more links to other things inside AF than to things outside.  This in itself builds the unity of consciousness.  But one suspects that inhibitory activation spreading accelerates this process, thus maintaining the continuity of the stream of consciousness (by preventing distraction to outside things that are closely related but not quite in the current focus).
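One simple way to render the boundary-squelching idea in code (invented weights and names; the real mechanism is a mode of Webmind’s activation spreading):

    # Inhibitory spreading at the AF boundary: links that cross from
    # inside the conscious coalition to outside it are damped,
    # crisping the boundary around the coalition.
    def squelch_boundary(links, af_nodes, inhibition=0.5):
        for link in links:
            crosses = (link["src"] in af_nodes) != (link["dst"] in af_nodes)
            if crosses:
                link["weight"] *= inhibition

    links = [{"src": "a", "dst": "b", "weight": 1.0},   # inside AF
             {"src": "a", "dst": "x", "weight": 1.0}]   # crosses boundary
    squelch_boundary(links, af_nodes={"a", "b"})
    # -> the a-b link keeps weight 1.0; the a-x link drops to 0.5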

 

Of course, this review leaves some important things out.  But the general idea should be clear.  AF contains a mix of expensive node processes, which allows it to construct a dynamically shifting “deep and thorough view” of the things that it contains.  Each thing in AF may be cast in many different forms, until a form is found that resonates with information in LTM and allows useful or interesting new conclusions to be drawn.

 

AF has limited capacity because it uses so much CPU time on each element inside it, and on each combination of elements inside it (since so many of its processes are based on interaction between elements of AF).  Because of the combinational aspect, the processing demands of AF increase exponentially with the number of elements in AF.  If we have N elements in AF, the number of categories formable as combinations of elements in AF is exponential in N.
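Concretely, if every subset of two or more AF elements is a candidate category, N elements admit 2^N - N - 1 of them (my toy count, ignoring whatever filtering the real system applies):

    # Candidate categories over an AF of N elements: all subsets
    # minus the singletons and the empty set.
    def candidate_categories(n):
        return 2**n - n - 1

    print(candidate_categories(5))    # 26
    print(candidate_categories(10))   # 1013
    print(candidate_categories(20))   # 1048555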

 

Roughly speaking, we may say that LTM views all aspects of the nodes in it, at a low level of intensity.  Putting a node in AF allows the mind to select particular aspects of it and view them with high intensity, using processes that explore every possible combination of the entities involved in that aspect of the node, and select the most useful combinations.

The optimal combination of the various AF processes is going to have to be adapted based on experience.  I doubt very much that it’s going to be anything close to the same as the optimal combination of processes in the LTM.   The basic processes are the same, but the nonlinearity of the dynamics leads to different emergent phenomena among the basic processes when they are done at high frequency among a small group of items.

6. Psychological Meaning of Webmind’s Conscious Processes

How do these Webmind AF processes map into the structures and dynamics of consciousness described in the preceding, theoretical sections?

 

Generally speaking, we see that AttentionalFocus in Webmind implements the global workspace concept, but adds a huge amount of meat to it, by providing a collection of specific dynamics – standard Webmind dynamics occurring in AF in a particular mix different from the mix seen in LTM, a mix that is guided by the rapid-processing needs and limited capacity of AF, a mix that leads to intelligent consciousness.

 

The idea that items get into the AF if they’re contextually important is implicit in the Webmind definition of AF as the most important nodes in the system.  The idea that it’s coalitions of related nodes that get pushed into the AF is a natural consequence of Webmind dynamics: because of the clustery nature of Webmind links in real cases, important nodes will tend to come in clusters.   The idea that nodes in AF will send messages to nodes outside of AF is also implicit in activation spreading.  Generally, the Webmind architecture is consistent with the “mind as a population of agents” viewpoint underlying the Global Workspace theory.

 

Turning to my own theory of consciousness… Coherentization has a very simple meaning in Webmind: Node formation.   A node represents a coherent whole.  AF creates a lot of nodes.  Furthermore, when a conscious moment is saved into memory as an ExperienceNode, then the fact that consciousness is bounded is being used to create a boundary, just as is theoretically required.

Is a perceptual-cognitive-active loop being used to create nodes?  The question here is: Does activation habitually flow from new perceptual items, to abstract concepts, to schema that do things with the concepts (generally toward current goals), then back to perceptual items (perhaps perceptions of the outcome of the schema just enacted), etc.?   It’s clear that this is one kind of activation flow pattern in Webmind.  Webmind doesn’t force this kind of flow pattern on consciousness, but it’s quite plausible that it may emerge from Webmind’s dynamics. 

The cognitive end of the loop is tester/controller, in the sense that once new categories are formed from perception, they’ll dissipate if they don’t build enough links.  If categories are given much more attention than the components they’re made of, then you’d have over-coherentization, in which the whole is more real to the mind than the parts; but this is a matter of getting the parameter balance right between concrete nodes in AF and category nodes in AF.

 

Now let’s go through the consciousness catalogue and see how the Webmind processes match up:

 

Perception

Yes: Webmind represents percepts as nodes and links.

Imagery and Inner Speech

AF will contain SentenceNodes and so forth, produced by various other nodes.  Some of them will be part of schema that say something to a user, and some won’t.

Perceptual images can pop up from LTM just like anything else.

Working memory is just a subset of AF, a set of nodes that are kept in AF because they’re getting activation from a currently active goal that wants them there.  A separate WorkingMemory data structure is not necessary.

Input Selection

Schema in AF may direct the system’s sensory receptors to one place or another, thus gathering different perceptual streams for the system.

If there is too much perception coming in, as compared to the size limitations of the AF Virtual NodeGroup, then some perceptions will wind up going into LTM directly.  These will be the least important ones, assuming there’s some link-building and activation spreading going on even at the perceptual level.

The voluntary direction of attention is a shift of attention from one stream of stimuli to another that involves the SelfNode, i.e. involves explicit knowledge of the fact that a shift of attention is being undertaken by the system, in the form of creation of new links describing the events occurring in consciousness.

Learning

Learning is carried out by the reasoning system and by evolutionary schema learning, by halo formation, etc. 

 

Memory

Memories, whether episodic, declarative or sensory, may bubble up from the LTM into the AF.

Spontaneous Problem Solving

This is reflected in several ways.

 

For example, we may have a process whereby

·         A conscious reasoning process (perhaps purely extensional) can’t proceed

·         The unconscious does a lot of reasoning on related areas (probably mixed extensional/intensional)

·         Eventually, one of these results comes back into consciousness and solves the problem (turns out to be valid extensionally)

Here, the difference between the conscious and unconscious parts of the reasoning is the focus on extension.

 

Or, for procedural learning, we have a process whereby goals are in AF, and these goals trigger schema learning in LTM, which then produces successful schema that are important because of their success, and get pushed up into AF.
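As a control-flow sketch (hypothetical names; "learn" stands in for the reasoning/evolution machinery):

    # Goals in AF trigger schema learning in LTM; schema that succeed
    # become important and pop up into AF themselves.
    def procedural_cycle(af, ltm, learn, threshold=0.8):
        for goal in [x for x in af if x["kind"] == "goal"]:
            schema = learn(goal)             # learning runs in LTM
            ltm.append(schema)
            if schema["success"] > threshold:
                schema["importance"] = schema["success"]
                af.append(schema)            # successful schema surface into AF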

 

7. Conclusion

In conclusion, it seems that AttentionalFocus in Webmind, as described here and in other related, more technical documents, can fulfill all the requirements of a conscious system as described in the Global Workspace or coherentization theories of consciousness.  It also does a lot of other useful things not explicitly mentioned in these theories, but clearly crucial to consciousness as we experience it.   One could also look at other theories of consciousness and test Webmind against them; this might well be an interesting exercise.  Even more interesting, though, is to implement Webmind’s consciousness along the lines described here and play with it and see what happens!  We’re working on it….