Tuesday, February 19, 2008

Characterizing Consciousness and Will in Terms of Hypersets

This is another uber-meaty blog post, reporting a train of thought I had while eating dinner with my wife last night -- one that appears to me to provide a new perspective on two of the thorniest issues in the philosophy of mind: consciousness and will.

(No, I wasn't eating any hallucinogenic mushrooms for dinner; just some grilled chicken with garlic and ginger and soy sauce, on garlic naan. Go figure.)

These are of course very old issues, and it may seem that every possible perspective on them has already been put forth, without anything being fundamentally resolved.

However, it seems to me that the perspectives on these topics explored so far constitute only a small percentage of the perspectives that may sensibly be taken.

What I'm going to do here is to outline a new approach to these issues, which is based on hyperset theory -- and which ties in with various things I've written on these topics before, inspired by neuropsychology and systems theory and probabilistic logic and so on and so forth.

(A brief digressive personal comment: I've been sooooo overwhelmingly busy with Novamente-related business stuff lately, it's really been a pleasure to take a couple hours to write down some thoughts on these more abstract topics! Of course, no matter what I'm doing with my time as I go through my days, my unconscious is constantly churning on conceptual issues like the ones I talk about in this blog post -- but time to write down my thoughts on such things is so damn scant lately.... One of the next things to get popped off the stack is the relation of the model of will given here with ethical decision-making, as related to the iterated prisoner's dilemma, the voting problem, and so forth. Well, maybe next week ... or next month.... I certainly feel like working toward making a thinking machine for real, is more critical than exploring concepts in the theory of mind; but on a minute-by-minute basis, I have to admit I find the latter more fun....)

Hypersets

One of the intuitions underlying the explorations presented here is that possibly it's worth considering hypersets as an important part of our mathematical and conceptual model of mind -- and consciousness and will in particular.

A useful analogy might be the way that differential equations are an important part of our mathematical and conceptual model of physical reality. Differential equations aren't in the world; and hypersets aren't in the mind; but these sorts of mathematical abstractions may be extremely useful for modeling and understanding what's going on.

In brief, hypersets are sets that allow circular membership structures, e.g. you can have

A = {A}

A = {B,{A}}

and so forth. It follows that you can have functions that take themselves as arguments, and lots of other stuff that doesn't work according to the standard axioms of set theory.

While exotic, hypersets are well-defined mathematical structures, and in fact simple hypersets have fewer conceptual conundrums associated with them than the real number system (which is assumed in nearly all our physics theories).
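For the programmers in the audience, one concrete way to picture a hyperset (following Aczel's anti-foundation axiom) is as a directed graph, where an edge from u to v means "v is a member of u" -- circular membership is then just a cycle in the graph. Here's a throwaway Python sketch of this picture (an illustration of mine, not anything canonical):

```python
class HSet:
    """A hyperset pictured as a graph node, with edges to its members."""
    def __init__(self, name):
        self.name = name
        self.members = []

# A = {A}: a single node whose only member is itself.
A = HSet("A")
A.members.append(A)

# A = {B, {A}}: A contains B, and a set whose sole member is A.
B = HSet("B")
inner = HSet("{A}")
A2 = HSet("A")
inner.members = [A2]
A2.members = [B, inner]

print(A in A.members)        # True: A really is a member of itself
print(A2 in inner.members)   # True: the circle closes through {A}
```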

The best treatment of hypersets for non-mathematicians that I know of is the book The Liar by Jon Barwise and John Etchemendy, which I highly recommend.

Anyway, getting down to business, let's start with consciousness, and then after that we'll proceed to will.

Disambiguating Consciousness

Of course, the natural language term "consciousness" is heavily polysemous, and I'm not going to try to grapple with every one of its meanings. Specifically, I'm going to focus on the meaning that might be specified as "reflective consciousness," which is different from the "raw awareness" that, arguably, worms and bugs have along with us bigger creatures.

Raw awareness is also an interesting topic. I tend toward a kind of panpsychism, meaning that I tend to believe everything (even a rock or an electron) possesses some level of raw awareness -- in which case raw awareness is just an aspect of being, rather than a separate quality that some entities possess and others lack.

Beyond raw awareness, though, it's clear that different entities in the universe manifest different kinds of awareness. Worms are aware in a different way than rocks; and, I argue, dogs, pigs, pigeons and people are aware in a different way from worms. What I'll (try to) deal with here is the sense in which the latter beasts are conscious whereas worms are not -- i.e. what might be called "reflective consciousness." (Not a great term, but I don't know any great terms in this domain.)

Defining Reflective Consciousness

So, getting down to business.... My starting-point is the old cliché that

Consciousness is consciousness of consciousness

This is very nice, but doesn't really serve as a definition or precise characterization.

In hyperset theory, one can write an equation

f = f(f)

with complete mathematical consistency. You feed f, as input, f; and you receive, as output, f.
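(If you want to see this with your own eyes: in an untyped programming language you can literally feed a function to itself. A trivial Python illustration, with the identity function as a fixed point of application:)

```python
def f(g):
    return g   # the identity function

assert f(f) is f   # feed f, as input, f; receive, as output, f
```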

It seems evident, though, that while this sort of anti-foundational recursion may be closely associated with consciousness, this simple equation itself doesn't tell you much about consciousness. We don't really want to say

Consciousness = Consciousness(Consciousness)

I think it's probably more useful to say:

Consciousness is a hyperset, and consciousness is contained in its membership scope

Here by the "membership scope" of a hyperset S, what I mean is the members of S, plus the members of the members of S, etc.
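To make "membership scope" concrete, here is a little Python sketch (again just my own illustration) that computes it as the transitive closure of the membership relation, using the graph picture from earlier; tracking visited sets makes the computation terminate even on circular structures:

```python
# members[s] lists the members of the set named s.
members = {
    "Consciousness": ["Consciousness"],   # the anti-foundational case
    "S": ["x", "T"],
    "T": ["S", "y"],
}

def membership_scope(s):
    """Members of s, plus members of members of s, etc."""
    seen, stack = set(), [s]
    while stack:
        for m in members.get(stack.pop(), []):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

print(membership_scope("Consciousness"))  # {'Consciousness'}: in its own scope
print(sorted(membership_scope("S")))      # ['S', 'T', 'x', 'y']
```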

This is no longer a definition of consciousness, merely a characterization.

What it says is that consciousness must be defined anti-foundationally, as some sort of construct via which consciousness builds consciousness from consciousness -- but it doesn't specify exactly how.

Next, I want to introduce the observation, which I made in The Hidden Pattern (and in an earlier essay), that the subjective experience of being conscious of some entity X is correlated with the presence of a very intense pattern, corresponding to X, in one's overall mind-state. This idea is also the essence of neuroscientist Susan Greenfield's theory of consciousness (though in her theory, "overall mind-state" is replaced with "brain-state").

Putting these pieces together (hypersets, patterns and correlations), we arrive at the following working definition of consciousness:

"S is conscious of X" is defined as: The declarative content that {"S is conscious of X" correlates with "X is a pattern in S"}

In other words: Being conscious of a pig, means having in one's mind declarative knowledge of the form that one's consciousness of that pig is correlated with that pig being a pattern in one's overall mind-state.

Note that this declarative knowledge must be expressed in some language such as hyperset theory, in which anti-foundational inclusions are permitted. But of course, it doesn't have to be a well-formalized language -- just as pigeons, for instance, can carry out deductive reasoning without having a formalization of the rules of Boolean or probabilistic logic in their brains. All that is required is that the conscious mind has an internal informal language capable of expressing and manipulating simple hypersets.

To make this formal, one requires also a definition of pattern, which I've supplied in The Hidden Pattern.
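(For concreteness, here is a toy Python rendering of the working definition -- an illustration of mine, not a serious formalization -- as a knowledge-base entry whose declarative content mentions the entry itself, which is exactly the hyperset-style loop:)

```python
kb = {
    "S is conscious of X": (
        "correlates",
        "S is conscious of X",                      # the self-reference
        "X is a pattern in S's overall mind-state",
    ),
}

def is_reflective(key):
    """True if the entry's content refers back to the entry itself,
    possibly through a chain of other entries."""
    def mentions(content, seen):
        if content == key:
            return True
        if content in seen or content not in kb:
            return False
        return any(mentions(part, seen | {content}) for part in kb[content][1:])
    return any(mentions(part, set()) for part in kb[key][1:])

print(is_reflective("S is conscious of X"))   # True
```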

OK, so much for consciousness. Now, on to our other old friend, will.

Defining Will

The same approach, I suggest, can be used to define the notion of "will," by which I mean the sort of willing process that we carry out in our minds when we subjectively feel like we are deciding to make one choice rather than another.

In brief:

"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}

To fully explicate this is slightly more complicated than in the case of consciousness, due to the need to unravel what's meant by "causal implication." This is done in my forthcoming book Probabilistic Logic Networks in some detail, but I'll give the basic outline here.

Causal implication may be defined as: Predictive implication combined with the existence of a plausible causal mechanism.

More precisely, if A and B are two classes of events, then "A predictively implies B" means it's probabilistically true that in a situation where A occurs, B often occurs afterwards. (Yes, this is dependent on a model of what constitutes a "situation," which is assumed to be part of the mind assessing the predictive implication.)

And, a "plausible causal mechanism" associated with the assertion "A predictively implies B" means that, if one removed from one's knowledge base all specific instances of situations providing direct evidence for "A predictively implies B", then the inferred evidence for "A predictively implies B" would still be reasonably strong. (In a certain logical lingo, this means there is strong intensional evidence for the predictive implication, along with extensional evidence.)

If X and Y are particular events, then the probability of "X causally implies Y" may be assessed by probabilistic inference based on the classes (A, B, etc.) of events that X and Y belong to.
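For concreteness, here is a toy Python sketch (mine; the real machinery is in PLN) of estimating the predictive-implication half from a hypothetical time-stamped event log -- the "plausible causal mechanism" half requires intensional inference and isn't sketched here:

```python
from typing import List, Tuple

def predictive_implication(events: List[Tuple[float, str]],
                           a: str, b: str, window: float) -> float:
    """Fraction of a-events followed by a b-event within `window` time
    units: a crude stand-in for the strength of "A predictively implies B"."""
    a_times = [t for t, e in events if e == a]
    b_times = [t for t, e in events if e == b]
    if not a_times:
        return 0.0
    hits = sum(1 for ta in a_times
               if any(ta < tb <= ta + window for tb in b_times))
    return hits / len(a_times)

# Invented log, for illustration only.
log = [(0.0, "S wills X"), (0.4, "S does X"),
       (5.0, "S wills X"), (5.2, "S does X"),
       (9.0, "S wills X")]
print(predictive_implication(log, "S wills X", "S does X", 1.0))  # 2/3
```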

In What Sense Is Will Free?

But what does this say about the philosophical issues traditionally associated with the notion of "free will"?

Well, it doesn't suggest any validity for the idea that will somehow adds a magical ingredient beyond the familiar ingredients of "rules" plus "randomness." In that sense, it's not a very radical approach. It fits in with the modern understanding that free will is to a certain extent an "illusion."

However, it also suggests that "illusion" is not quite the right word.

The notion that willed actions somehow avoid the apparently-deterministic/stochastic nature of the universe is not really part of the subjective experience of free will ... it's a conceptual add-on that comes from trying to integrate our subjective experience with the modern scientific understanding of the world, in an overly simplistic and partially erroneous way.

An act of will may have causal implication, according to the psychological definition of the latter, without this act of will violating the basic deterministic/stochastic equations of the universe. The key point is that causality is itself a psychological notion (where within "psychological" I include cultural as well as individual psychology). Causality is not a physical notion; there is no branch of science that contains the notion of causation within its formal language.

In the internal language of mind, acts of will have causal impacts -- and this is consistent with the hypothesis that mental actions may ultimately be determined by deterministic/stochastic lower-level dynamics. Acts of will exist on a different level of description than these lower-level dynamics.

The lower-level dynamics are part of a theory that compactly explains the behavior of cells, molecules and particles; and some aspects of complex higher-level systems like brains, bodies and societies. Will is part of a theory that compactly explains the decisions of a mind to itself.

My own perspective is that neither the lower-level dynamics (e.g. physics equations) nor will should be considered as "absolutely real" -- there is no such thing as absolute reality. The equations of physics, glorious as they are, are abstractions we've created, and that we accept due to their utility for helping us carry out various goals and recognize various regularities in our own subjective experience.


Connecting Will and Consciousness

Connecting back to our first topic, consciousness, we may say that:

In the domain of reflective conscious experiences, acts of will are experienced as causal.

This of course looks like a perfectly obvious assertion. What's nice is that it seems to fall out of a precise, abstract characterization of consciousness and will.

Free Will and Virtual Multiverse Modeling

In a previous essay, written a few years back and ultimately incorporated into The Hidden Pattern, I gave an analysis of the psychological dynamics underlying free will, the essence of which may be grokked from the following excerpt:

For example, suppose I am trying to decide whether to kiss my beautiful neighbor. One part of my brain is involved in a dynamic which will actually determine whether I kiss her or not. Another part of my brain is modeling that first part, and doesn’t know what’s going to happen. A virtual multiverse occurs in this second part of the brain, one branch in which I kiss her, the other in which I don’t. Finally, the first part comes to a conclusion; and the second part collapses its virtual multiverse model almost instantly thereafter.

The brain uses these virtual multiverse models to plan for multiple contingencies, so that it is prepared in advance, no matter what may happen. In the case that one part of the brain is modeling another part of the brain, sometimes the model produced by the second part may affect the actions taken by the first part. For instance, the part (call it B) modeling the action of kissing my neighbor may come to the conclusion that the branch in which I carry out the action is a bad one. This may affect the part (call it A) actually determining whether to carry out the kiss, causing the kiss not to occur. The dynamic in A which causes the kiss not to occur, is then reflected in B as a collapse in its virtual multiverse model of A.


Now, suppose that the timing of these two causal effects (from B to A and from A to B) is different. Suppose that the effect of B on A (of the model on the action) takes a while to happen (spanning several subjective moments), whereas the effect of A on B (of the action on the model) is nearly instantaneous (occurring within a single subjective moment). Then, another part of the brain, C, may record the fact that a collapse to definiteness in B’s virtual multiverse model of A preceded an action in A. On the other hand, the other direction of causality, in which the action in A caused a collapse in B’s model of A, may be so fast that no other part of the brain notices that this was anything but simultaneous. In this case, various parts of the brain may gather the mistaken impression that virtual multiverse collapse causes actions, when in fact it’s the other way around. This, I conjecture, is the origin of our mistaken impression that we make “decisions” that cause our actions.
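(To make the timing asymmetry vivid, here is a toy Python timeline -- the numbers are invented, and this is an illustration, not a brain model -- showing how an observer C with coarse temporal resolution would get the causal order backwards:)

```python
# A = the circuit that actually settles the action
# B = the virtual multiverse modeler
# C = an observer with coarse temporal resolution
events = [
    (10.00, "A settles internally: kiss = False"),
    (10.01, "B's multiverse collapses to the no-kiss branch"),
    (10.50, "A executes the (non-)action"),
]

RESOLUTION = 0.1   # C can't separate events closer together than this

def as_seen_by_C(events, resolution):
    """Bucket events into 'subjective moments': A's settling and B's collapse
    land in the same bucket, while the executed action lands later -- so C
    concludes the collapse (the 'decision') came first and caused the action."""
    return [(round(t / resolution), what) for t, what in events]

for moment, what in as_seen_by_C(events, RESOLUTION):
    print(moment, what)
# 100 A settles internally: kiss = False
# 100 B's multiverse collapses to the no-kiss branch
# 105 A executes the (non-)action
```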



How does this relate to the current analysis in terms of hypersets?

The current analysis adds an extra dimension to the prior one, which has to do with what in the above quote is called the "second part" of the brain involved with the experience of will -- the "virtual multiverse modeler" component.

The extra dimension has to do with the ability of the virtual multiverse modeler to model itself and its own activity.

My previous theory discusses perceived causal implications between actions taken by one part of the brain, and models of the consequences of these actions occurring in another part (the virtual multiverse modeler). It notes that sometimes the mind makes mistakes in perceiving a causal implication between a collapse in the virtual multiverse model and an action, when a more careful understanding of the mental dynamics would reveal a more powerful causal implication in the other direction. There is much evidence for this in the neuropsychology literature, some of which is reviewed in my previous article.

The new ingredient added by the present discussion is an understanding that the virtual multiverse modeler can model its own activity and its relationship with the execution of actions. Specifically, the virtual multiverse modeler can carry out modeling in terms of an intuitive notion of "will" that may be formalized as described above:

"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}

where "S" refers specifically to the virtual multiverse modeler component, the nexus of the feeling of will.

And, as noted in my prior essay, it may do so whether or not this causal implication would hold up when the dynamics involved were examined at a finer level of granularity.

Who Cares?

Well, now, that's a whole other question, isn't it....

Personally, I find it interesting to move progressively toward a greater and greater understanding of the processes that occur in my own mind every day. Since understanding (long ago) that the classical notion of "free will" is full of confusions, I've struggled to figure out the right attitude to take in my own mind toward the decisions that come up in my own life.

Day after day, hour after hour, minute after minute, I'm faced with deciding between option A and option B -- yet how seriously can I take this decision process if I know I have no real will anyway?

But the way I try to think about it is as follows: Within the descriptive language in which my reflective consciousness exists, my will does exist. It may not exist within the descriptive language of physics, but that's OK. None of these descriptive languages has an absolute reality. But, each of these descriptive languages can usefully help us understand the others (as well as helping us to understand the world directly); and having an understanding of the systematic biases made by the virtual multiverse modeler in my brain has certainly been useful to me. It has given me a lot more respect for the underlying unconscious dynamics governing my decisions, and this I believe has helped me to learn to make better decisions.

In terms of my AI work, the main implication of the train of thought reported here is that in order to experience reflective consciousness and will, an AI system needs to possess an informal internal language allowing the expression of basic hyperset constructs. Of course, in different AI designs this could be achieved in different ways: for instance, it could be explicitly wired into the formalism of a logic-based AI system, or it could emerge spontaneously from the dynamics of a neural-net-based AI system. In a recent paper I explored some hypothetical means via which a neural system could give rise to a neural module F that acts as a function taking F as an input; this sort of phenomenon could potentially serve as a substrate for an internal hyperset language in the brain.

There is lots left to explore and understand, of course. But my feeling is that reflective consciousness and will, as described here, are not really so much trickier than other mental phenomena like logical reasoning, language understanding and long-term memory organization. Hypersets are a different formalism than the ones typically used to model these other aspects of cognition, but ultimately they're not so complex or problematic.

Onward!

2 Comments:

Blogger Daniel Lewis said...

Hi Ben.

Many thanks for such an interesting blog post. I have just written about Hypersets and the Semantic Web on my blog post titled "Semantics and Hyperdata".

If what you suggest logically makes sense, then we could technically define Consciousness in OWL/RDF.

Any thoughts?

Daniel.

3:58 PM  
Blogger Terry said...

Wow, great post. I am new to this blog and your work, so if I am going over old ground, I apologize.

So, it seems that, in an internalist's language, you are equating the manifestation of a homunculus with a hyperset, and are embracing rather than fighting the recursion issues. Am I on the right track?

Further, in your discussion of will, it seems like you stop the recursion practically at the moment of measurement (a la quantum physics) and collapse all potential UODs to one at the moment of decision. Cool. Don't you end up with huge resource contention issues, though?

12:28 PM  
