

 

A General Theory of Emotion

in Humans and Other Intelligences

 

Ben Goertzel

February 20, 2004


 

Introduction

 

Emotions play an extremely important role in human mental life – but it is not, on the face of it, clear whether this needs to be the case for AIs.  Much of human emotional life is distinctly human in nature, clearly not portable to systems without humanlike bodies.  Furthermore, many problems in human psychology and society are caused by emotions run amok in various ways – so in some respects it might seem desirable to create emotion-free AIs.

 

On the other hand, it might also be that emotions represent a critical part of mental process, and that human emotions are merely one particular manifestation of a more general phenomenon – one which must be manifested in some way in any mind.  This is the perspective I’ll advocate here.  I think the basic phenomenon of emotion is something that any mind must experience – and I will make a specific hypothesis regarding the grounding of this phenomenon in the dynamics of intelligent systems.  Human emotions are then considered as an elaboration of the general “emotion” phenomenon in a peculiarly human way.  There are a few universal emotions – including happiness, sadness and spiritual joy – which any intelligent system with finite computational resources is bound to experience, to an extent.  And then there are many species-specific emotions, which in the case of humans include rage, joy, lust and other related feelings.

 

 

What Is Emotion?

 

Emotions have two aspects, which may be called hot versus cold (Mandler, 1975), or “conscious-experiential-flavor” versus “neural/cognitive structure-and-dynamics” – or, using my preferred vocabulary, qualia versus pattern.  From some conceptual perspectives, the relation between the qualia aspect and the pattern aspect is problematic.  I follow a philosophy in which qualia and patterns are aligned – each pattern comes along with a quale, which is more or less intense according to the “prominence” of the pattern (the degree of simplification that the pattern provides in its ground) (see Goertzel, 2004a).  In this approach, the qualia and pattern aspects of emotion may be dealt with in a unified way.
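To make “prominence” slightly more concrete: one simple formalization along the lines of Goertzel’s pattern theory (a sketch in the spirit of Goertzel, 2004a, not a quotation of it) takes a pattern in an entity X to be a process P that produces X while being simpler than X, and measures the pattern’s intensity by the degree of simplification achieved:

$$ \mathrm{IN}(P \mid X) \;=\; \frac{c(X) - c(P)}{c(X)} $$

where c(·) is some complexity measure.  On the present view, the greater this intensity, the more prominent the pattern and the more intense the associated quale.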

 

So what is the general pattern of “emotion”?  Dictionary definitions are not usually reliable for philosophical or scientific purposes, but in this case, a definition from dictionary.com is actually a reasonable place to start:

 

Emotion

A mental state that arises spontaneously rather than through conscious effort and is often accompanied by physiological changes; a feeling: the emotions of joy, sorrow, reverence, hate, and love.

 

One problem with this definition is its use of the mixed-up word “conscious.”  I will replace this with the term “free will,” which, in a recent essay, I have sought to define in a general, physiologically and computationally grounded way.  Thus I arrive at a definition of an emotion as:

 

Emotion

A mental state that does not arise through free will, and that is often accompanied by physiological changes

 

“Free will,” as I understand it (Goertzel, 2004b), is a complex sort of quale, consisting primarily of:

  1. The registration of one’s own action in one’s internal “virtual multiverse model” at roughly the same time as the action itself occurs – the feeling of several possible branches collapsing into the single branch actually taken.

This generally goes along with:

  2. A reasonably confident rational story as to why that particular branch was chosen.

Sometimes, though, these two aspects are uncorrelated, giving the feeling of “I don’t know why I decided to do that.”

 

Mental states that do not arise through free will are mental states that:

  1. Register in the virtual multiverse model only in a delayed way, after the fact, so that there is no feeling of having collapsed the possibility-branches oneself.

This often goes along with:

  2. The absence of any confident rational story as to why the state arose.

But sometimes, these two aspects are uncorrelated, and one can rationally reconstruct why some spontaneous mind-state occurred, in a reasonably confident way.
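The contrast between these two cases can be made concrete with a small sketch.  The code below is a toy illustration, not anything drawn from the essays cited: the class name, the will_window parameter and the classify logic are all hypothetical, invented only to show how “willed” versus “emotional” states might be distinguished purely by registration lag and the presence or absence of a rational story.

```python
class VirtualMultiverseModeler:
    """Toy model of the branch-collapsing dynamics described above.

    Hypothetical assumption: a mental event feels "willed" when its
    registration in the virtual multiverse model is nearly simultaneous
    with its occurrence, and "emotional" when registration lags behind.
    """

    def __init__(self, will_window=0.1):
        # Maximum occurrence-to-registration lag (seconds, arbitrary
        # units) for an event to still feel chosen rather than suffered.
        self.will_window = will_window

    def classify(self, occurred_at, registered_at, has_story):
        """Classify a mental event by lag and rational reconstructibility."""
        lag = registered_at - occurred_at
        if lag <= self.will_window:
            if has_story:
                return "free will: branch collapse felt as a reasoned choice"
            return "free will without a story: 'I don't know why I did that'"
        if has_story:
            return "spontaneous state, rationally reconstructed afterwards"
        return "emotion: delayed registration, no decision story"


modeler = VirtualMultiverseModeler()
# A deliberate action: registered almost immediately, with a reason.
print(modeler.classify(0.00, 0.05, has_story=True))
# A surge of feeling: registered much later, with no clear reason.
print(modeler.classify(0.00, 0.60, has_story=False))
```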

 

What causes mental states to register in the brain’s virtual multiverse model in a delayed way?   One cause might be that these mental states are ambiguous and difficult to understand, so that it takes the virtual multiverse modeler a long time to understand what’s going on – to figure out which branch has actually been traversed.  Another might be that the state is correlated with physical processes that inhibit the virtual multiverse modeler’s normal “branch collapsing” activity – and that the branch-collapsing only proceeds a little later, once this inhibitory effect has diminished.

 

In the case of human emotions, the “accompaniment with physiological changes” mentioned in the above definition of emotion seems to be a key point.  It seems that there’s a time lag between certain kinds of broadly-based physiological sensations in the human brain/body and the registration of these sensations in the human brain’s virtual multiverse modelers.

 

There are many reasons why this delay might occur.  The phenomenon may be a combination of different factors, for instance:

  1. The broad physiological arousal underlying an emotion may inhibit the virtual multiverse modeler’s normal “branch collapsing” activity, which then resumes only once this inhibitory effect has diminished.
  2. The sensations involved may be temporally diffuse, consisting of data arriving from many parts of the brain and body with various time lags, so that the modeler cannot quickly determine which branch has actually been traversed.

And so, in regard to emotions, a flexibly superposed subjective multiverse is maintained, rather than a continually collapsed subjective universe that defines a single crisp path through the virtual multiverse.  This helps explain both the beautiful and the confusing nature of emotions.

 

Regarding the second hypothesized factor, the obvious question is: Why do the broadly-based, partly-physical sensations we humans call “emotions” have this strange relationship with time?  This may be largely because they consist of various types of data coming in from various parts of the brain and body, with various time lags.  A piece of sensation coming in from one part of the brain or body right now may have a different meaning depending on information about what’s going on in some other part of the brain or body – but this information may not be there yet.  When information gathering and integration regarding a “distributed action pattern” requires this kind of temporally diffuse activity, then the tight connection between action and virtual-multiverse-model collapse that exists in other contexts doesn’t exist anymore.  Ergo, no feeling of “free will” – rather, a feeling of things happening in oneself, without a correlated “decision process.”  A strong emotion can make one feel “outside of time.”

 

Furthermore, while it’s easy to make up a high-level story as to what made one sad or happy or feel some other emotion, it’s not at all easy to make up a story regarding the details of an emotional experience.  Usually, one just doesn’t know – because so many of the details of the emotional experience have to do with physiological dynamics that are opaque to the analytical brain (unless the analytical brain makes a huge, massively effort-consuming push to become aware of these normally unconscious processes).

 

So we have arrived at a more specific, technical, “mechanistic” and hypothetical definition of emotion:

 

Emotion

A mental state marked by prominent internal temporal patterns that register in the system’s virtual multiverse model only in a delayed and diffused way, so that they do not come along with the feeling of free will.  Such patterns will often, though not always, involve complex and broad physiological changes.

 

 

What does this mean regarding the potential experiencing of emotions by nonhuman minds?  Clearly, in any case where there’s diverse and ambiguous information coming in from various hard-to-control parts of an intelligent system, one is not going to have the “usual” situation of virtual multiverse collapse.  One is going to have a sensation of major patterns occurring inside one’s own mind, but without any “free will” type “decision” process going along with it.  This is, in the most abstract sense, “emotion.”  Emotions in this sense need not be correlated with physiological patterns, but it makes sense that they often will be.

 

Emotional Typology

 

Now we turn to the question of emotional typology.  Humans experience a vast range of emotions.  Will other types of minds experience completely different emotion-types, or is there some kind of general system-theoretic typology of emotions?

 

I think there will be a small amount of emotional commonality among various minds – certain very simple emotions have an abstract, mind-architecture-independent meaning.   But the vast majority of human emotional nuance is tied to human physical embodiment and evolutionary history, and would not be emulated in an AI mind or a radically different biological species.

 

Any system that has a set of goals that remain constant over a period of time can experience an emotion I call “abstract happiness,” which is the emotion induced by an increasing amount of goal-achievement.  On the other hand, it can also experience “abstract sadness,” i.e. the emotion induced by a decreasing amount of goal-achievement.  These emotions can become quite complex, because organisms can have multiple goals, and at any one moment achievement may be increasing for some goals while decreasing for others.

 

Different flavors of happiness are then associated with different sorts of goals.  For instance, there is the goal of increasing the amount of harmony between the system and the rest of the universe – where harmony may be defined as, say, the amount of similarity between the two plus the amount of emergent pattern they produce together.  What I call “spiritual joy” is the feeling of increase in inner/outer harmony – i.e., the feeling of increasing achievement of the “inner/outer harmony” goal.
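As a concrete illustration of abstract happiness, abstract sadness and spiritual joy as just defined, here is a minimal sketch in Python.  Everything in it – the GoalDrivenEmotions class, the 0-to-1 achievement scale, the name inner_outer_harmony – is a hypothetical construction for illustration, not part of any system described in this essay.

```python
from typing import Callable, Dict

# A goal is modeled, hypothetically, as a function returning how well it
# is currently achieved, on a 0..1 scale.
Goal = Callable[[], float]

class GoalDrivenEmotions:
    """Toy sketch: emotions as changes in goal-achievement over time."""

    def __init__(self, goals: Dict[str, Goal]):
        self.goals = goals
        self.previous = {name: g() for name, g in goals.items()}

    def emotions(self) -> Dict[str, float]:
        """Derive emotion intensities from per-goal achievement deltas."""
        deltas = {}
        for name, goal in self.goals.items():
            current = goal()
            deltas[name] = current - self.previous[name]
            self.previous[name] = current
        total = sum(deltas.values())
        return {
            # Rising overall goal-achievement ~ abstract happiness.
            "abstract_happiness": max(total, 0.0),
            # Falling overall goal-achievement ~ abstract sadness.
            "abstract_sadness": max(-total, 0.0),
            # Rising inner/outer harmony ~ spiritual joy.
            "spiritual_joy": max(deltas.get("inner_outer_harmony", 0.0), 0.0),
        }

# Hypothetical usage: two goals whose achievement levels we update by hand.
levels = {"find_food": 0.2, "inner_outer_harmony": 0.5}
system = GoalDrivenEmotions({k: (lambda k=k: levels[k]) for k in levels})
levels["find_food"] = 0.6             # progress on an ordinary goal
levels["inner_outer_harmony"] = 0.7   # growing harmony with the world
print(system.emotions())
# ~ {'abstract_happiness': 0.6, 'abstract_sadness': 0.0, 'spiritual_joy': 0.2}
```

Note that on this account a system whose goal-achievement levels never change would feel neither abstract happiness nor abstract sadness – emotion here tracks the change in achievement, not its level.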

 

But why should increasing goal-achievement cause emotion in the sense I’ve defined it above?  There are two aspects to this:

 

  1. Factors tied to human evolutionary history
  2. Factors that are more based on information processing, and may apply beyond the human domain

 

Due to the second sort of factors, I suspect that happiness, sadness and spiritual joy are emotions with some universality.  Due to the first sort, the specific flavor that these general emotions have in human beings is definitely peculiarly human in character.

 

In humans, achieving a goal like finding sex or finding a good place to sleep or killing prey or producing babies naturally induces broad and uncontrollable physiological changes.  Achieving more abstract goals, in humans, tends to associatively bring forward patterns and processes associated with achieving these simpler primordial goals – thus activating broad patterns of physiological activity in ancient parts of the brain, and other parts of the body.

 

The evolutionary-history-bound nature of human emotions is well depicted in a snatch of dialogue from William Gibson’s novel Pattern Recognition (2003, p. 69) – a discourse by an advertising executive on the importance of humans’ odd cognitive architecture for his trade:

 

           

“It doesn’t feel so much like a leap of faith as something I know in my heart.” …

“The heart is a muscle,” Bigend corrects.  “You ‘know’ in your limbic brain.  The seat of instinct.  The mammalian brain.  Deeper, wider, beyond logic.  That is where advertising works, not in the upstart cortex.  What we think of as ‘mind’ is only a sort of jumped-up gland, piggybacking on the reptilian brainstem and the older, mammalian mind, but our culture tricks us into recognizing it as all of consciousness.  The mammalian spreads continent-wide beneath it, mute and muscular, attending its ancient agenda.  And makes us buy things.”

“… [A]ll truly viable advertising addresses that older, deeper mind, beyond language and logic.”

 

What of specific human emotions like lust, rage and fear?  Clearly these exist because we have specific physiological response systems for dealing with specific situations.  Fear activates flight-related subsystems; rage activates battle-related subsystems; lust activates sex-related subsystems.  Each of these body subsystems, when activated, floods the brain with intense, diverse and hard-to-process stimuli, which are beyond the control of “free will”-related processes.  Many of the responses of these body subsystems are fast – too fast for virtual multiverse modeling to deal with.  They’re fast because primordially they had to be fast – you can’t always stop to ponder before running, attacking or mating.

 

Clearly, a large portion of human emotion has to do with the virtual multiverse modeler’s difficulties in modeling actions that come from the “older, deeper mammalian mind” and the yet more archaic reptilian brainstem.  Yet this kind of awkward fusion of old and new brains is not the sum total of emotion, human or otherwise.  Let’s return to the notion of abstract happiness as the emotion which accompanies goal-achievement.  When a human achieves an abstract goal, the mammalian brain responds in much the same way as it responds to the achievement of primordial goals like finding food, getting sex, escaping from an enemy, or winning a fight.  But the activation of these mammalian circuits is not the only reason for the virtual multiverse modeler to get confused into relative inactivity.  There is also the fact that when a goal is achieved, not by a specific localized action, but by a complex coordinated activity pattern among many system components, this activity pattern may well be hard for the virtual multiverse modeler subsystems to model.  So, peculiarities of human evolution aside, it seems some kinds of goal-achievement are more likely to cause emotion than others, purely on information-processing grounds.

 

 

AI Emotions

 

There’s no doubt that, unless an AI system is given a mammal-like motivational system, its emotional makeup will vastly differ from that of humans.  An AI system won’t necessarily have strong emotions associated with battle, reproduction or flight.  Conceivably it could have subsystems associated with these types of actions, but even so, it could be given a much greater ability to introspect into these subsystems than humans have in regard to their analogous subsystems. 

 

Overall, my conclusion about AI emotions is that:

  1. Any AI with persistent goals will be prone to the universal emotions – abstract happiness, abstract sadness and, insofar as it pursues inner/outer harmony, spiritual joy – since these depend only on goal-achievement dynamics, not on human physiology.
  2. The specifically human emotions such as rage, fear and lust will be absent unless analogous subsystems are deliberately engineered – and even then, greater introspective access to those subsystems may make such emotions weaker and more controllable than their human counterparts.

It’s interesting to consider these issues in terms of the specific structures and dynamics of the Novamente AI system (Goertzel et al, 2003).  In this context, a specific prediction made by the present theory of emotions is that complex map dynamics will be more closely associated with emotions than other aspects of Novamente cognition.  Complex map dynamics involve temporal patterns that are hard to control, and subtle enough that the present state is much better understood once the immediate future is known.  One may infer from this a possible major difference between Novamente psychology and human psychology: the strongest emotions of a Novamente system may be associated with its most complexly unpredictable cognitions – rather than, as in humans, with phenomena that evoke the activities of powerful, primordial, opaque-to-cognition subsystems.

 

On the other hand, what can we say about emotions in the case of a hybrid human-computer intelligence architecture like the “global brain mindplex” posited in (Goertzel, 2003a)?  In this case, it seems, the main source of difficult-to-model unpredictability in the mindplex’s mind will be the human component.  Thus, the subjective experience of a global brain mindplex would likely be one of continually being swung around by strong emotions, corresponding to complex patterns of change in the human mass mind.

 

 

References