
network-dynamical, informational and phenomenal aspects

Copyright Mitja Perus, 1997
National Institute of Chemistry, Lab. for Molecular Modeling and NMR
Hajdrihova 19 (POB 3430), SI-1001 Ljubljana, Slovenia
Slovene Society for Cognitive Sciences


After an introduction to the multidisciplinary consciousness problem, the first part describes the information dynamics in the biophysical networks (neural, quantum, sub-cellular) underlying consciousness. The second part discusses representational and intentional characteristics of the informational, reflective and phenomenal aspects of consciousness and self-consciousness. Finally, the recent debate on qualia is presented and commented upon. The aim of the article is thus to present a broad overview of the multi-level aspects of consciousness.

1. The current consciousness debate

Recently, the problem of qualia (the nature and origin of qualitative subjective experiences, "what it is like to be", the first-person perspective) and the problem of phenomenal consciousness (as opposed to its informational aspect) have been identified as the "hard" problem (Chalmers, 1995; TucsonII, 1996). The problems of the system-theoretical background of consciousness and of information processing (the naturalistic, third-person perspective) have, on the other hand, been defined as the "easy" problems, because they are not as puzzling as the phenomenal qualia (Banks, 1996; Hubbard, 1996).

Searle (1993), as a typical philosopher with non-reductive views (as opposed to eliminativist materialists like the Churchlands), defines consciousness as a subjective qualitative process of awareness, sentience or feeling. Consciousness has a multiple nature which is at the same time synthesized into a unity: multi-modal perceptions and representations are always unified into a single indivisible experience (the binding problem). Self-reference is essential for consciousness. It is connected with the qualia in the sense that it constitutes an "I" which satisfies the following combination: object and I (subject) together give a qualitative experience (of the object by the I). Qualia are the ways things seem to us (Dennett), e.g. the redness of a red apple, the auditory feeling of a melody, the feeling in one's own body, and the feeling of what it is like to be someone, to experience oneself. They are resistant to all physicalist or reductionist explanations, which are, by definition, quantitative and therefore limited to the third-person aspect only (Flanagan, 1992).

Let us present the views of the most important "lobbies" in consciousness research. The first large group treats consciousness as an emergent phenomenon (bottom-up causation). The second group says that our phenomenal (including physical) world reveals itself only through consciousness (top-down causation). I think that these views can be made complementary (bi-directional causation), as complex systems teach us. Furthermore, there are two more groups: those who argue that neural networks are sufficient for consciousness (Baars, 1997), and those who offer quantum systems as binders of neurally-processed perceptions and thoughts into a unified experience using quantum coherence (Penrose, 1989, 1994; Lockwood, 1989). Protagonists of the second theory present microtubular networks and other sub-cellular structures (dendritic nets, presynaptic vesicular grids, cellular membranes, etc.) as "interfaces" between quantum and neural information-processing systems.

In the quantum world there is a deterministic and computable Schrödinger dynamics which is interrupted by a random and non-local, or non-computable, (objective) reduction of the wave-function. This "collapse", which occurs during a quantum measurement, promotes a parallel-distributed microscopic quantum process to a localized macroscopic classical phenomenon. It is argued that this process is relevant for unconsciousness-to-consciousness transitions and/or memory-to-consciousness transitions. Examples are transitions from superpositions of microtubular configurations to a selected configuration representing some cognitive information (Hameroff, 1994).

In neural nets there are similar effects, but perhaps they do not suffice for the holism of consciousness. Therefore quantum Bose-Einstein condensation of particles into an indivisible coherent quantum whole may be the final answer. Quantum events in synapses, like tunneling, may influence neurotransmitter release, which is essential for neural memory (Walker, Beck). This is another example of neuro-quantum influence, one which may be disrupted by anaesthetics (Hameroff et al., 1996).

Large-scale neuroscience identifies some brain areas which are candidates for conscious processing, or which would make cognitive processes in other brain areas conscious: striate cortex (V1) or extrastriate cortex (V2-V5), prefrontal cortex, reverberatory activity in pyramidal neurons in cortical layers 5 and 6, hippocampus, thalamus (or the cortex-thalamus dialog), intralaminar nuclei, reticular formation, and the co-operation of these areas. Some scientists point out that correlated 40 Hz neural oscillations bind representations into unified conscious experiences (Hameroff et al., 1996; Newman, 1997).

Here are some characteristic views of leading cognitive neuroscientists. Gray (TucsonII, 1996) argues that consciousness is not necessary for: analysis of sensory input and speech, learning, memory, selection of response, control of action, or production of speech output. Consciousness comes too late to regulate behaviour. We are conscious of the output, not of the processing itself. The conscious process is the one which wins the competition among many information processes. Consciousness is more like fame arising in a large group of people (where some are famous and the others treat them as famous) than like the group itself. Specific groups of neurons encoding specific information have been found. On the other hand, the brain works globally and integrates all local group-firings (Baars, 1997).

Greenfield (TucsonII, 1996) argues that consciousness is spatially multiple, temporally unified, continuously dynamic and derived from a stimulus. On the other hand, Libet and others report time-projections of conscious experiences (e.g., backwards in time, so that we have the impression that sensation and conscious registration occurred simultaneously, not with a delay due to processing). Hobson says that the level of consciousness changes as a function of activation, the focus of consciousness as a function of input/output gating, and the form of consciousness as a function of the ratios of modulatory neurotransmitters.

The neuroscientific findings which will have to be considered in consciousness theories can be summarized as follows. From EEGs only (non)attention or similarly global or general activities are evident. Abstract concepts exist without language. Extensive damage to the brain does not necessarily lead to loss of consciousness. Blindsight patients say that they do not see, but, if forced, they react as if they could indeed see (they actually perceive, but are not conscious of seeing). If their visual cortex is intact, but other cortical areas are paralyzed, there is no vision. So, the visual cortex is essential for vision, but not sufficient by itself. All that we can know about the quality of a subject's awareness is his own report.

According to the "holon" theory, the states of one level are the rules of the next level, and vice versa, infinitely in both directions, with an overall self-referential loop. Velmans speaks about subjective psychical projection of the state of the neural correlate of consciousness back into the physical world. So, pain does not come from the physical space of the body (e.g., the finger), but originates in the physical space of the brain and is then projected as if it occurred in the finger.

Part I: System-processual backgrounds of consciousness

Neuronal patterns representing the "objects of consciousness", synaptic patterns constituting memory

Science cannot explain the phenomenal qualia of conscious experience yet. But it can provide much important knowledge about the processual and representational backgrounds of cognition and consciousness. It seems that not neural networks alone, but the coupling of the neural and quantum levels, exhibits the required system-dynamical basis.

Models using parallel-distributed processing are the most relevant for modeling micro-cognition. They are suitable also for understanding intuitive or aconceptual mental processes. They therefore provide a complementary, neuro-processing basis for the rational and logical thought-processes which are analyzed by cognitive science based on classical artificial intelligence.

Neural-network models are the best-known models which process information in parallel and in a collective way throughout the whole network. Associative neural networks of the Hopfield type (Amit, 1989; Peretto, 1992) are the simplest model of such symmetrical complex systems. This simplest model includes a system of basic elements (formal neurons) and a system of connections (formal synapses), each of which represents the strength of the interaction between the two neurons connected, or the correlation between them. Symmetrical means that the same signal-summation process goes on in every neuron, i.e. that the neurons are functionally equivalent.

Here we will consider neural networks as a model of a complex system which can be used in various ways, depending on the interpretation of the basic elements (formal neurons) and their connections (formal synapses). In the beginning, the formal neurons and synapses will be nervous cells and synapses. Later on, this interpretation will be attributed to some quantum elements (quantum "points", particles or their spins), etc.

It will be shown that neural as well as quantum complex systems realize similar collective dynamics if we neglect the role of "anatomy" of an individual element (formal neuron) of the network (details in Perus, 1995c, 1996, 1997a).

The activities of many neurons form neuronal configurations. Firing configurations which are especially stable, because they represent energy minima, are called neuronal patterns. Specific patterns are correlated with specific external objects: whenever an external object is perceived, its corresponding pattern is reconstructed. Thus neuronal patterns, which are the physiological correlates of mental representations, are carriers of information and have a specific meaning. Every synapse changes its strength in proportion to the correlation of the activities of the two neurons it connects (the Hebb "learning rule"). Patterns of synaptic transmissions represent memory (Peretto, 1992).
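The Hebb rule just described can be sketched in a few lines of code. This is a minimal illustration, not the author's own formalism; the bipolar patterns and the normalization by the number of neurons are conventional Hopfield-model choices:

```python
# Hebbian storage in a Hopfield-type net (illustrative sketch).
# Neurons are bipolar (+1 = firing, -1 = quiescent); the synaptic
# strength J[i][j] is the correlation of neurons i and j, summed
# over all stored patterns and normalized by the number of neurons.

def hebb_matrix(patterns):
    n = len(patterns[0])
    J = [[0.0] * n for _ in range(n)]
    for xi in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                     # no self-coupling
                    J[i][j] += xi[i] * xi[j] / n
    return J

patterns = [[1, -1, 1, -1], [1, 1, -1, -1]]    # two illustrative patterns
J = hebb_matrix(patterns)
```

The resulting matrix is symmetric (J[i][j] = J[j][i]), which mirrors the functional equivalence of the neurons mentioned earlier.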

In such a network, neuronal patterns which act as attractors are formed (Amit, 1989). This means that large groups of neurons constitute a sort of organization which is distributed all over the network. We do not experience the neurons' exchange of signals or their patterns as "mosaics", but only their virtual, emergent aspect - attractors. Patterns acting as attractors are those dominant neuronal configurations each of which lies at the bottom of its (free-)energy minimum and causes the convergence of all the neighbouring configurations (those which lie within its "basin of attraction") towards the nearest pattern-qua-attractor.

Such patterns-qua-attractors represent categories or gestalts (Haken, 1991). Gestalts are qualitative unities arising from collective system-dynamics which cannot be reduced to the sum of the activities of the constituent basic elements alone. Patterns-qua-attractors thus represent mind-like representations, because they are isomorphic to environmental objects (Perus, 1995b). Such patterns are not only collective neuronal states; they also encode specific information. Whenever a specific object occurs in the environment, the reconstruction of a specific neuronal pattern-qua-attractor is triggered. Actually, a superposition of the sensory stimulus-pattern and the most similar memory-patterns (coded in the matrix of synapses) is formed in the system of neurons. The system of neurons is the carrier of the information which is currently being processed (i.e., which is the most important in that specific context or those circumstances, or which is most correlated with the state of the environment). It could be said that the pattern of neuronal activities represents the object of consciousness (which has to be, of course, distinguished from consciousness-in-itself, or the phenomenal qualia, respectively).

The system of synaptic connections represents (long-term) memory. The strengths of these connections between neurons are proportional to the correlation of the activities of the two neurons connected. So, the matrix of synaptic connections represents the autocorrelations of neuronal patterns (Hebb learning rule). By this matrix (= memory), neuronal patterns (= virtual "objects of consciousness") are transformed into new patterns. This is an association. Such transformations may be connected into associative chains or temporal pattern sequences, which are the origins of thought processes (Peretto, 1992; McClelland et al., 1986).

Recall of a memory pattern takes place when an external pattern interacts with the system of synaptic connections. This causes the occurrence of the nearest (most similar) memory pattern in the system of neurons (the object of consciousness). Actually, during the recall process a superposition of the external pattern and of all the relevant memory patterns (i.e., expectations, presumptions) is made in the system of neurons. Selective "moving" of patterns from the system of neurons ("consciousness") to the system of synapses (memory), and vice versa, is realized by the continuous "interplay" of neurons and their signals through synapses.
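The recall process can be illustrated by iterating the signal summation until the network settles into the nearest stored pattern. The patterns, the synchronous update scheme, and the function names below are illustrative choices, not taken from the article:

```python
# Content-addressable recall in a Hopfield-type net (sketch):
# a noisy cue converges to the most similar stored pattern.

def store(patterns):                  # Hebb matrix, no self-coupling
    n = len(patterns[0])
    return [[sum(x[i] * x[j] for x in patterns) / n if i != j else 0.0
             for j in range(n)] for i in range(n)]

def recall(J, state, steps=10):       # iterated signal summation
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(J[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
other  = [1, -1, 1, -1, 1, -1, 1, -1]
J = store([stored, other])
cue = [1, 1, 1, -1, -1, -1, -1, -1]   # `stored` with one neuron flipped
result = recall(J, cue)               # converges back to `stored`
```

Partial information (the corrupted cue) triggers reconstruction of the whole pattern - the content-addressability of such memory.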

To summarize, neural networks can realize the micro-structure of cognition: pattern recognition, association, adaptation, content-addressable memorization and recall (partial information triggers reconstruction of the whole information), forgetting, categorization, compressed data storage, selection and abstraction of all relevant data, the basis of attention, etc. (McClelland et al., 1986).

On the other hand, with neural networks alone we are not able to include consciousness in a general theory of mental processes, although associative neural networks realize many of the characteristics which are essential for the processing basis of consciousness. They realize recursive, auto-reflexive information processes. Neuronal patterns interact with each other and with themselves, because their constituent neurons are constantly interacting. This self-interaction of neuronal patterns is a global process encompassing a web of local interactions, where the individual neurons represent each other's context and content. However, even such self-referential, collective processes seem not to suffice for the unity of consciousness as a global gestalt-process.

There are many indications that unintentional consciousness may be trans-individual, or trans-personal, and thus cannot be limited to the neural brain and its virtual structures alone. Quantum or sub-quantum systems include several phenomena which may be related to consciousness - e.g., non-locality, undividedness, long-range coherence (Bohm, 1980; Bohm & Hiley, 1993; Kafatos & Nadeau, 1990). Important support for the quantum hypothesis comes from meditational or mystical experiences (Perus, 1997e), the collective (un)conscious, and many hypothetical parapsychological phenomena.

The fact that, on the one hand, neural-network processes and their virtual processes are very relevant for consciousness, while on the other hand various sub-cellular and quantum-biological processes also seem to be relevant, raises the question of the relation between the neural, the virtual, the sub-cellular and the quantum levels.

Why quantum systems?

It seems that even a very large and complex neural network would not be sufficient for consciousness. There are indications that consciousness is connected with quantum phenomena (Lockwood, 1989; Goswami, 1990). The main reasons for this hypothesis are the following:

On the other hand, all thought-processes, including consciousness, seem to emerge from complex-system dynamics. The objects of consciousness and the stream of conscious thought seem to be represented in some physical, or at least informational (virtual), "medium". That "medium" has to be a complex system, which alone is flexible, fuzzy and adaptive enough, and has good self-organizing and recursive abilities. Because the mathematical formalism of neural network theory is confined to the collective system-dynamics, it remains to a large extent valid also for complex systems of other basic elements (Perus, 1997c). As has already been emphasized, our formal neurons and formal synaptic connections are not necessarily biological neurons and synapses (or, in models, artificial neurons - processors). There are various candidates in biological systems which may be modeled in a neural-network-like way and which have, hypothetically, a relevant role in micro-cognition processes:

Quantum theorists favour Fröhlich-coherent, laser-like, collective biophysical processes in living systems. They have also discussed systems of spins and electrical dipoles, and even delocalized states over the whole of space-time as treated by string theory. In relativity theory all times and spaces exist simultaneously; only our consciousness gives us the impression of time-flow and of "now" by "illuminating" (manifesting) successive portions of space-time locally. According to Hameroff and Penrose (1994), sequences of irreversible quantum collapses represent the origin of the arrow of time, and connect the problem of consciousness with relativistic space-time geometry and quantum gravity.

There are several compact theories which serve as starting points to model various cognitive levels using mathematical language:

Multi-level coherence in the brain

Unintentional consciousness ("consciousness-in-itself", "pure" consciousness), especially transcendental (mystical) consciousness, may be associated with the quantum field or, better, with the "overall sub-quantum sea" - the holomovement of Bohm and Hiley (1993). On the other hand, intentional consciousness (consciousness of some object of consciousness) cannot be associated merely with a specific quantum-informational state. If a specific mental representation (neuronal pattern-attractor) is processed under the control of consciousness, it is coupled or correlated with a specific quantum eigenstate which was explicated by the "wave-function collapse". So, by our hypothesis, intentional consciousness is a result of neuro-quantum interactions (interactions of the classical and quantum worlds). The "wave-function collapse" is a transition of the quantum state from a state described by a linear combination of many quantum eigenstates to a "pure" state which is one eigenstate (one eigen-wave-function) only. Simply put, a superposition of many "quantum patterns" is transformed into one "quantum pattern" only. The "wave-function collapse" means a selective projection from the sub-conscious memory to the conscious representation which is explicated from the memory. There are two possible versions of memory and memory-recall: the quantum one (just mentioned) and the classical neural one. Memory may be a parallel-distributed pattern in the system of synaptic connections, but it may also be more subtle (the "parallel worlds" of Everett's many-worlds interpretation of quantum theory, the implicate order of Bohm and Hiley, etc.).

Mind is necessarily a multi-level phenomenon, because we cannot entirely separate intentional consciousness from the object of consciousness, which may be an internal virtual image or a real external object. We may take it that unintentional consciousness is of quantum nature; virtual representations are associated with neuronal patterns; neurons and external objects are of classical nature. The intentional consciousness then necessarily emerges from a combined neural-quantum coherence which is furthermore correlated with a classical state in the (external) environment.

There is still the question of the relationship between quantum-informational states and virtual representations. Virtual representations are usually based on neuronal patterns which act as attractors. But an attractor is a contextual gestalt-structure which cannot be reduced to the neuronal pattern (which represents the attractor's kernel) alone. So, virtual structures are attractors which thus overbuild their constitutive material basis. They represent complex webs of relations and ratios: a neuronal pattern is an attractor only if it is more stable and more dominant in the system-dynamics than the neighbouring neuronal configurations in configuration-space.

Quantum mechanics governed by the Schrödinger equation does not exhibit attractors, but they are formed in the case of the "wave-function collapse". In that case, because of the interaction of a classical macroscopic system (the measurement apparatus, the environment, or our sensory apparatus, respectively) with the quantum system, the wave-function "collapses" and a specific quantum eigenstate (a "quantum pattern") occurs as an attractor. So, there are quantum virtual structures also, and they cannot be reduced to a quantum eigenstate alone, because they occur only as a result of interaction with a classical system. Thus, quantum virtual structures are (re)constructed as a result of the so-called quantum measurement. The "measurement apparatus" may be our sensory and associative neural system directly, or a machine which is then observed by that neural system - the indirect way. In both alternatives the "wave-function collapse" occurs as a result of a specific interaction with a classical system. The probability of the "collapse" is very much higher if the interaction is knowledge-based (as in the case of a radio: if we know the right frequency, we are able to receive the associated information).

So, again, virtual structures cannot be reduced to the corresponding state of the neural or quantum "medium", although they are tightly connected with it! Virtual states are always non-local, or parallel-distributed, respectively. They cannot be measured, or can be measured only indirectly - through the states of their corresponding neural or quantum "ground". For the sake of modeling and analysis we do have to distinguish the neural, quantum and virtual levels, and environmental influence also. In an "organic synthesis" they are, of course, involved in one united process, including the environment. That united process is called (intentional) consciousness.

Mathematical and system-theoretical analogies of neural and quantum networks

We have presented some reasons why one should be motivated to research the parallels between quantum processes and neural-network processes. In this chapter it will be shown that the mathematical formalism of quantum theory is analogous to that describing associative neural networks (Perus, 1995c, 1997a).

A quantum state can be described as a superposition of quantum eigenstates ("quantum patterns"). Analogously, a neural-network state may be described as a superposition of neuronal patterns. In both cases the coefficients of this linear combination ("mixture") of patterns describe the influence (mathematically: the projection) of the corresponding pattern on the actual state of the system. Each pattern is represented by its own coefficient. The coefficient essentially describes how probable it is that the corresponding pattern will be reconstructed or recalled from memory. In fact, the coefficients (quantum probability coefficients or neural order parameters) represent the meaning of a pattern in a specific context (Haken, 1991). The meaning is a result of the parallel-distributed dynamic relationships of the complex info-physical system.
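The projection coefficients mentioned above can be sketched numerically. For bipolar neuronal patterns the coefficient is simply the normalized overlap (inner product); the vectors here are made up for illustration:

```python
# Sketch: the coefficient ("order parameter") of a pattern is its
# normalized projection onto the current state of the network.

def overlap(pattern, state):
    n = len(state)
    return sum(p * s for p, s in zip(pattern, state)) / n

state     = [1, 1, -1, -1]        # current network state
pattern_a = [1, 1, -1, -1]        # identical pattern
pattern_b = [1, -1, 1, -1]        # orthogonal pattern

c_a = overlap(pattern_a, state)   # 1.0: pattern fully "present"
c_b = overlap(pattern_b, state)   # 0.0: pattern contributes nothing
```

A large coefficient means the pattern dominates the superposition and is likely to be reconstructed, in line with its role as the "meaning" of the pattern in the current context.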

Feynman's version of the Schrödinger equation has the same structure as the dynamic equation for neurons. The Feynman interpretation shows that the wave-function at a specific location and time is a result of the summed influences from all other space-time points. Similarly, the neural dynamics is actually a spatio-temporal summation of signals from other neurons.

Transformations of the quantum system result from microscopic parallel-distributed interaction webs. They can be described by the Green function, which is an autocorrelation function of quantum eigenstates. The Green function, or propagator, of a quantum system actually describes how the system transforms itself into a new state through numerous internal interactions between its constitutive "quantum points" (mathematical "basic elements" of the system). It is a matrix which describes such a parallel-distributed transformation of the whole system from an initial state to the final state. Turning to neural nets, this is similar to the Hebb learning rule, which is an autocorrelation function of neuronal patterns. A superposition of such (auto)correlation patterns represents memory. If parallel-distributed transformations using Hebb or Green correlation-matrices are interpreted as carriers of information, they are called associations. In the relativistic case, the so-called S-matrix has the role of the quantum Green function, and our analogy still remains valid.

The "collapse of the wave-function" is a transition of a quantum state from a linear combination of eigenstates to the case in which a single "non-mixed" eigenstate is individually realized. It is a transition from implicate order (inactive, potential information) to explicate order (active, manifest information). The other, unrealized eigenstates remain inactive in the implicate order. This is very similar to neuronal-pattern reconstruction from memory. In memory there is a superposition of many stored patterns. One of them is selectively "brought forward from the background" if an external stimulus triggers such a reconstruction. In the quantum case, a "wave-function collapse" likewise takes place as a result of the external influence of the experimenter (quantum measurement). In both cases a suitable informational context is necessary for the pattern-reconstruction or the "collapse" to occur. Human knowledge increases the probability of such an event enormously, because knowing a part of the pattern and presenting it to the system triggers the reconstruction of the whole. This is a general characteristic of all homogeneous, symmetric complex systems like neural nets, holograms, (sub)quantum nets, etc. The environment selects that neural/quantum pattern which is the most similar (or most correlated) to the state of the environment.

To emphasize once again: neural-pattern reconstruction and the "wave-function collapse" are results of a transition from the implicate order (with latent, implicit, potential information only) to the explicate order (carrying manifest, realized information) (Bohm & Hiley, 1993). These two processes may represent a basis for memory-to-consciousness transitions, or unconsciousness-to-consciousness transitions. The implicate order represents a combination of very many possible states or processes. It is analogous to the set of so-called "parallel worlds", or parallel sub-branches of the universal wave-function, proposed by Everett. The explicate order, on the other hand, represents a state or process which is at a given moment physically actualized - it is "chosen" from a set of potential (implicate) states, or is a result of their optimal "compromise". In memory, patterns are represented as potential information only (i.e., merely as the correlations of these previously gained patterns). The influence of the environment explicates these correlations, so that the whole pattern is manifested again. This explicated pattern (a neural or quantum one) can then serve as the object of consciousness.

In neural networks the correlations between patterns are important for memory. In quantum mechanics the phase differences between different parts of the wave-function are important. A phase difference is a discrepancy between two oscillatory processes (e.g., the time delay between their peaks). Phase differences control the time-evolution of the probability distribution, involving interference of the contributions of the different stationary eigen-wave-functions. Thus, changing the phase relations between eigen-wave-functions is analogous to the learning process in neural networks, where new pattern-correlations are added into the synaptic correlation-matrix. This is also similar to holography (Bohm, 1980).
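The role of phase differences can be made concrete with a two-eigenstate superposition. The amplitudes and energies below are arbitrary illustrative values (with hbar set to 1), not taken from the article:

```python
import cmath, math

# Sketch: the probability density of a superposition of two stationary
# eigenfunctions at one point oscillates with the phase difference
# (E2 - E1) * t between their time-evolution factors.

def density(psi1, psi2, E1, E2, t):
    a = psi1 * cmath.exp(-1j * E1 * t)   # first eigenstate's contribution
    b = psi2 * cmath.exp(-1j * E2 * t)   # second eigenstate's contribution
    return abs(a + b) ** 2               # interference of the two terms

psi1, psi2 = 0.6, 0.8                    # real amplitudes at a fixed point x
E1, E2 = 1.0, 2.0                        # eigenenergies (hbar = 1)

rho_0  = density(psi1, psi2, E1, E2, 0.0)      # in phase: (0.6 + 0.8)^2
rho_pi = density(psi1, psi2, E1, E2, math.pi)  # out of phase: (0.8 - 0.6)^2
```

Each stationary term alone has a constant density; only their relative phase makes the distribution evolve in time, which is the interference effect referred to above.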

In neural network theory there are uncertainty principles (Daugman) which are similar to the quantum uncertainty principle: the inability to simultaneously determine two conjugate observables (e.g., position X and momentum P). An interesting neural analogy of Heisenberg's uncertainty principle is the inability to simultaneously determine patterns in the system of neurons and patterns in the system of interactions (formal synapses). We are unable to be conscious of a pattern in the system of neurons and to control a pattern in the system of connections at the same time. Only one pattern, the one temporarily realized in the system of neurons, is explicated. So, we can be aware only of this one pattern, which has been extracted from memory. All the others remain implicit in the system of interactions, or in the dynamics itself, respectively.

To summarize the uncertainty analogy, we are not able to control simultaneously a pattern in the system of neurons (representing the "object of consciousness") and patterns in the system of synaptic connections (constituting memory). This is similar to the quantum situation, where it is not possible to explicate (to unfold) all eigen-wave-functions at the same time.

There is an additional analogy corresponding to the previous one. The duality of the system of neurons and the system of connections reminds one of the double nature of particles and waves, or of the duality between the position (X-) representation and the momentum (P-) representation of quantum mechanics. Thus, the so-called position (X-) representation of quantum theory can be approximated by the system of neurons. The so-called momentum (P-) representation can, on the other hand, be associated with the system of interactions which regulates all transformations of the network-states.

Open questions concerning the fractal-like nature of the brain

Only some basic mathematical analogies were presented here; numerous other parallels can be found between neural and quantum processing. They indicate that there is a subtle "division of labour" and an isomorphic cooperation between the neural and the quantum levels. These levels may stand in a sort of fractal relationship (infinite replicas of each other).

As we have already realized, there are several such psycho-physical levels which are candidates for the "media" of cognitive processes: the neuro-level (systems of neurons and synapses, dendritic trees), the virtual levels (the hierarchy of neuronal patterns-attractors), the subcellular level (the cytoskeleton, especially microtubules (Hameroff, 1994)), the quantum level (configurations of quantum particles with their spins), the sub-quantum level (non-local processes in the "vacuum" or "holomovement", "beables") (Bohm & Hiley, 1993), etc.

Although these levels are complex systems of various basic elements, their parallel-distributed collective dynamics is governed by very similar principles!

The question remains, whether an underlying "medium" of consciousness is always necessary, and which level codes some specific information. Are various levels carriers of specific mental processes also simultaneously, synthetically?

Every collective state of a complex system may constitute a specific gestalt (a specific virtual unity) which cannot be reduced to the states of the constitutive elements of the system. The formation of a specific isomorphic (e.g., fractal) multi-level coherence is a central problem. Practice in our computer simulations of neural networks shows that we can, by explicitly ruling the artificial-neuronal level, also govern the artificial-virtual level - implicitly. If our dynamic equations for neurons and synapses regulate the patterns only, the attractors always accompany this dynamics implicitly! The neuronal dynamic equations (represented in the computer program) are differential equations (with a local range of validity), but attractor-structures may be mathematically described by variational calculus (with a global range of validity). We do not necessarily need both mathematical descriptions - one is sufficient. Thus, we may organize one level and the others will automatically follow. This is the reason why the reductionist approach usually works for all practical purposes.
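This implicit governing of the attractor level can be illustrated: the simulation below applies only the local update rule for individual neurons, yet the global (free-)energy function, which appears nowhere in the update itself, decreases monotonically until the state settles into the pattern-qua-attractor. The network size and pattern are illustrative:

```python
# Sketch: ruling only the neuronal level (local update rule) implicitly
# governs the virtual level (the energy landscape and its attractor).

def energy(J, s):                 # global quantity, never used by updates
    n = len(s)
    return -0.5 * sum(J[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

def update_neuron(J, s, i):       # purely local signal summation
    h = sum(J[i][j] * s[j] for j in range(len(s)))
    s[i] = 1 if h >= 0 else -1

pattern = [1, -1, 1, -1, 1, -1]
n = len(pattern)
J = [[pattern[i] * pattern[j] / n if i != j else 0.0
      for j in range(n)] for i in range(n)]   # Hebb matrix, one pattern

s = [-1, -1, 1, -1, 1, -1]        # start state: pattern with one flip
energies = []
for sweep in range(3):
    for i in range(n):
        update_neuron(J, s, i)
        energies.append(energy(J, s))

monotone = all(b <= a + 1e-12 for a, b in zip(energies, energies[1:]))
```

The energy trace never increases and the state ends in the stored pattern, even though the program only ever manipulated individual neurons.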

What is informationally relevant are the contextual relationships between the elements of a system, not these units alone. So, consciousness is a complex system with a double nature: it has a physical aspect and a psychical aspect, the latter resulting from a subtle internal self-referential interpretation-"mechanism".

The first main difference between physical and psychophysical processes is that the quantum system itself is not intentional (it does not carry any mental information), whereas mind is intentional (it carries specific mental contents). The second difference is that the quantum system itself does not have any relatively independent environment, but the brain does. The brain therefore models its macroscopic environment in a specific and flexible manner, using the biological neural network as a macro-micro interface and a (subconscious) preprocessor for a unified conscious experience which involves neuro-quantum coherence.

The only essential difference between the mathematical formalisms of quantum theory and of neural network theory (provided, of course, that we disregard the internal structure of the basic elements of the system, i.e., formal neurons and synapses) is the imaginary unit (i) appearing in the Schrödinger equation. But this difference vanishes if one considers networks of neurons with oscillatory activities, which can also realize associative content-addressable memory (Haken, 1991). Various associative processes may be realized by attractor neural networks, but in order to be conscious, it seems, they must have quantum correlates. In that case the neural brain is a classical system which acts (similarly to a quantum measurement apparatus) as a non-linear processing interface between the environment and the linear quantum-information processes.
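The formal closeness can be made concrete with a toy complex-valued associative memory, in which each "neuron" is an oscillator represented by a unit-modulus complex amplitude. This is only one simple way of realizing oscillatory content-addressable memory, sketched under my own choice of parameters, and is not Haken's specific model:

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 64, 2
# stored patterns: unit-modulus complex amplitudes (one phase per "oscillator")
z = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(P, N)))

# Hermitian "synaptic" matrix: the complex analogue of the Hebb rule
W = (z.T @ z.conj()) / N

# noisy version of pattern 0: each oscillator's phase jittered
s = z[0] * np.exp(1j * rng.normal(0, 0.4, size=N))

def fidelity(s):
    # overlap with stored pattern 0, insensitive to a global phase shift
    return np.abs(z[0].conj() @ s) / N

f0 = fidelity(s)
for _ in range(5):
    s = W @ s                  # linear, Schrodinger-like superposition step
    s = s / np.abs(s)          # project each oscillator back onto the unit circle

print(f0, fidelity(s))
```

Apart from the complex conjugation, the storage and recall prescriptions are formally identical to the real-valued Hopfield case, which illustrates why the imaginary unit is the main formal difference between the two theories.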

There is something else that neural networks and the sub-quantum system have in common: their functional processes transcend space-time structures (Perus, 1995c). Like sub-quantum processes, neural attractors operate in "prespace". Neurons are, of course, located in space-time, but their virtual structures cannot be so located. Specifically, if the constitutive neurons of the patterns are mixed, but the strengths of their connections remain the same, then all the patterns-qua-attractors remain the same. The human perceptual system codes correlated (similar) stimuli into topologically ordered structures; like elements are encoded close together, according to the Kohonen model. So, the spatial order of neural maps emerges as a consequence of correlated stimuli of various types arriving from various locations. By the functional analogy between neural and sub-quantum processes, space-time can be treated as a special case of a correlation-network, because it is established as a result of self-organizing processes in the holomovement (or network). To summarize, space-time is a secondary structure; correlated parallel-distributed processes are primary and thus more fundamental.
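The claim that attractors survive a "mixing" of the neurons, as long as the connection strengths are carried along, is easy to verify numerically (a sketch with an arbitrary small network of my own choosing, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
p = rng.choice([-1, 1], size=N)      # one stored pattern
W = np.outer(p, p) / N               # Hebbian connections
np.fill_diagonal(W, 0)

def is_fixed_point(W, s):
    # s is an attractor state if one synchronous update leaves it unchanged
    h = W @ s
    return np.array_equal(np.where(h >= 0, 1, -1), s)

# "mix" the neurons: relabel them by a random permutation,
# carrying every connection strength along with its two endpoints
perm = rng.permutation(N)
W_mixed = W[np.ix_(perm, perm)]
p_mixed = p[perm]

print(is_fixed_point(W, p), is_fixed_point(W_mixed, p_mixed))  # True True
```

The pattern-qua-attractor is thus a property of the relational (connection) structure alone, not of the spatial placement of the neurons.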

The main problem of brain-mind modeling using neural networks and orthodox quantum mechanics is the fact that mind, and especially consciousness, is even more holistic than these models. Consciousness transcends the necessary analytic division of a system into elements (formal neurons) and interactions (formal synaptic connections). Thus, well-defined basic units of cognitive information cannot be found; processes are more "fundamental" than units. It seems that consciousness, and the sub-quantum "sea" as well, is a superposition of all possible quantum-informational "networks".

For unintentional consciousness, the connection with the "vacuum" or holomovement is the most relevant one. For intentional consciousness, a coherence of the (sub)quantum level with the neural, subcellular and virtual levels (including coupling with some environmental object) is necessary. Without this multi-level coherence, it cannot be imagined how one could be conscious of a macroscopic object detected by sensory neurons. One thus needs neuro-quantum mediators.

Part II: From informational (cognitive) to phenomenal consciousness and self-consciousness

The debate on the nature of mental representations

Intentional consciousness needs mental representations in order to represent the objects we are conscious of. As already presented, they are approximated by neuronal patterns-qua-attractors in neural network models.

The existence of mental representations was at the centre of the discussions between cognitivists (representationalists) and ecologists (e.g., J.J. Gibson). Ecologists emphasize the importance of the environment and its continuous mutual interaction with the organism, which unites the organism and its environment into an indivisible dynamical whole.

But the question remains whether specific external patterns are projected into specific internal representations (the cognitivists' position), or whether the brain only performs specific transformations of specific input patterns into specific outputs, i.e. specific responses of the organism to stimuli from the environment (the ecologists' position). In the first case representations (pictorial, propositional, linguistic) are more fixed and stable. They have well-defined semantic kernels which are relatively independent of their context. In the second case "representations" would be purely dynamic and transitional, with strong dependence on the context.

There are two main levels - the cognitive (symbolic) and the sub-cognitive (sub-symbolic) level. It must be emphasized that they coexist and that there are also many quasi-levels in between (McClelland et al., 1986). The connectionist level constitutes an underlying system-processual medium for higher-order mental representations - just as quantum physics is a necessary processual background for classical physics. The invariance of cognitive representation-patterns is only an "envelope" for very complex internal dynamics on the sub-cognitive or sub-symbolic levels.

The neural medium is able to "absorb" each external pattern in order to "get into its shape". An external pattern is not constantly represented in the brain, but only when the influence of the environment forces the brain to reconstruct the corresponding representation. All uninteresting or unimportant patterns are only abstractly coded in the system of synaptic connections and wait there for reconstruction or "unfolding" when necessary. The brain combines environmental influences with the use of representational codes in memory (the system of synaptic connections). It makes a superposition of external patterns and already stored internal codes (which represent the organism's expectations). The environmental influence selectively extracts those features from memory which are most similar to the actual state of the environment. So, mental representations get into correlative coherence with external patterns, or, better to say, mental processes get into parallel synchronous dynamics with environmental processes.
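This selective extraction by superposition can be sketched numerically: a partial external cue is superposed with all stored codes, and the code most similar to the cue dominates the reconstruction. The sizes, the number of stored patterns, and the choice of which pattern is cued are my own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
stored = rng.choice([-1, 1], size=(4, N))   # abstractly coded memories

# external influence: only the first 40 "sensory" components are driven,
# and they happen to match stored pattern 2
cue = np.zeros(N)
cue[:40] = stored[2, :40]

# superpose all stored codes, each weighted by its overlap with the cue
overlaps = stored @ cue / N
field = overlaps @ stored
reconstructed = np.where(field >= 0, 1, -1)

print(np.argmax(np.abs(overlaps)))          # which memory was extracted
print((reconstructed @ stored[2]) / N)      # completeness of the recall
```

Memory here is not a store of explicit pictures but a system of correlations; the environmental influence merely selects which correlation-pattern is unfolded and completed.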

The conclusion would be that there are strongly environment-dependent representations in the mind, but they are not static at all - they are very dynamic, flexible and adaptive, carrying only the filtered (abstracted) main characteristics of the patterns.

The structure of internal representations (and their semantic relation-network) is an isomorphic virtual image of the structure of environment (Perus, 1996b), including its individual patterns, their spatial and temporal correlations, and groups of environmental patterns. Brentano also says (Brentano, 1973, pp. 9) that our spatial and temporal world exhibits the same relations as those exhibited by the object of our perceptions of space and time.

To summarize, associative neural network models and their computer simulations support the view that epistemic intermediaries exist, but that they are strongly environment-dependent, dynamic, and only transitional. Evidence that representations exist is also provided by the fact that, after the amputation of a limb, people still feel it as if they still had it. Undoubtedly, transitional (temporary) intermediaries are responsible for this feeling. Such intermediaries are not states; they are processes. The degree of transitionality varies in inverse proportion to the stability, invariance and importance of external patterns and their internal representational counterparts, to their frequency of occurrence, and to the amount of attention paid to them.

Representational and intentional backgrounds of phenomena

Specific mappings of specific objects into specific mental representations (i.e., specific patterns of neural activity acting as attractors) constitute the intentional and representational basis of phenomenal experience, but cannot explain the qualitative nature of phenomena. Therefore I will postpone the discussion of the qualitative component of phenomena and now consider merely their informational component, the so-called "access"-consciousness (Davies & Humphreys, 1993; Marcel & Bisiach, 1988).

From a purely informational point of view, a phenomenon is an object perceived "through" the state of the neural system (Nelkin, 1996). In all phenomena, objects and their representations are always "bound together". Objects (insofar as they are phenomena to us) and their representations have no meaning and no existence without one another. Furthermore, phenomena represent a correlation or coherence, or even an effective unification, of objects with their mental representations. Brentano says that the objects of sensations are merely phenomena, and that color, sound, warmth, taste etc. do not really exist outside our sensations, even though they may point to objects which do so exist (Brentano, 1973, pp. 9). But he also says (ibid., pp. 69) that color is not seeing and sound is not hearing. We could say that color is a characteristic of an object (only when this object acts as a phenomenon to us!) only insofar as it was gained through the process of seeing. Namely, object and neural system might initially have been uncoupled.

Objects and "their" characteristics (color etc.) can be phenomena only when they are perceived (seen etc.). So, seeing always effectively unifies the notions of the object, the phenomenon and its color into a virtual whole. The neural system must be coupled with the object through this process of seeing. This unification is only a virtual and effective one - as if it established a sort of higher-order meta-gestalt which compounds the mental (virtual, emergent) and the physical (system-processual) into one - into a "virtual unity". So, we must speak not about a man and an object separately, but about a man-seeing-an-object.

Phenomena do not have properties like shape and size, but they do possess analogues of those properties. Phenomenal states are not coloured, yet they correlate with colours: their variations (including varying scales, intensity, etc.) are "read out" in such a way that we are led to conclude that a property of external objects varies in a similar way (Nelkin, 1996). In this way, phenomena act as image-like qualitative representations. An epiphenomenalist would say that they have no role in perception itself, but are co-effects of the processes that result in percepts. They somewhat indirectly accompany perception and alter in parallel with the external objective situation. Non-reductionists, on the other hand, would attribute a causal role to phenomena. I would say that the latter are right, except in automatic (e.g., reflex) behaviour, which is triggered before the irreducible conscious control of the subject's I is switched on and may, somewhat later, freely alter the actions.

Intentionality, judgements and emotional attitudes

According to Brentano, we never only think, but we always think about something. A pattern of neural activity is the "carrier" of a specific content (correlated with an external pattern - an object).

According to Brentano (1973, pp. 278), representations, judgements and emotional attitudes are three basic, but interdependent, classes of mental reference. Brentano (1973, pp. 265) writes: "The inner consciousness, which accompanies every mental phenomena, includes a presentation, a cognition and a feeling, all directed towards that phenomenon." Later (Brentano, 1973, pp. 276): "Every mental activity is the object of a presentation included within it and of a judgement included within it, it is also the object of an emotional reference included within it". "Nothing can be judged, desired, hoped or feared, unless one has a presentation of that thing." Using neural network theory, we can describe how patterns become associatively connected: they (like their constituent neurons) are connected to each other and represent context and content for each other. A representation is not only connected with other representations, but can also be symbolically represented (coded) by the firing of a cardinal neuron or a cardinal ensemble of neurons, or virtually by so-called order parameters (Haken, 1991). The firing of a cardinal neuron symbolizes the occurrence of its corresponding representation.

Judgements are intentional, even volitional, psychical events, but their neural correlates are realized as the "flipping" of cardinal neurons (or changing order parameters), which codes the mean-field situation ("general atmosphere", average) of the neural system and its global transitions. A judgement is an affirmation or a denial, and is neurally represented by an excitatory or inhibitory action of a cardinal neuron towards the corresponding pattern which is the object of the judgement. The strength of activity of a cardinal neuron symbolizes the degree of conviction with which the judgement is made.

Here we should add that in biological neural networks even special "veto"-cells exist, which are protagonists of the processes underlying judgements. But the role of "veto"-cells is limited to the context of system-dynamics only, i.e. the system triggers them. So, nothing volitional can be traced at the neural level.

In a broader sense, emotional activity is love or hate, which is connected with higher-order pattern-agreement (mutual support) or disagreement. This causes pleasantness (pleasure etc.) or unpleasantness (suffering etc.). The mechanisms underlying judgements (how they arise and what mental effects they have) and emotions can be modeled by neural networks, but the involvement of consciousness in the sense of free will and self-awareness cannot be satisfactorily understood this way.

Intentional consciousness, awareness and self-awareness

Intentionality means that every mental process always has a reference to a content, or is directed upon an object (a phenomenon). This is particularly characteristic of consciousness (except in transcendental mystical states, which are un-intentional). According to Brentano (1973), intentionality represents a typically psychical phenomenon which cannot be reduced to physical phenomena, so it is an example of the essential difference between the psychical and the physical. The second difference is, according to Brentano (1973, pp. 85), that all physical phenomena have extension and spatial location, but mental phenomena (thinking, willing etc.) appear without extension and spatial location. On the other hand, quantum physics and parallel-distributed complex systems show that this division can be melted away: it is merely a result of being-inside-an-attractor (extension, localization) or being-beyond-local-attractors ("flowing freely" across the set of possible system states).

Dennett adds that an action is intentional only if the actor is aware of the action spontaneously ("automatically", without observation of the action). His example (Dennett, 1969, pp. 165): if somebody is tapping in the rhythm of "Rule, Britannia" and is not aware of this (though other people recognize it), then such tapping-in-a-rhythm is not intentional.

Before we start to discuss the problem of consciousness and self-awareness (consciousness of consciousness) from the points of view of Brentano and of neural network theory, we have to emphasize the tight connection and inter-dependence between representations and re-representations (Oakley, 1990). Brentano (1973, pp. 127) claims that there is a special connection between the object of an inner representation and the representation of that representation, and so on. Let us quote Brentano (1973, pp. 127):
"The presentation of the sound and the presentation of the presentation of the sound form a single mental phenomenon; it is only by considering it in its relation to two different objects, one of which is a physical phenomenon and the other a mental phenomenon, that we divide it conceptually into two presentations. In the same mental phenomenon, in which the sound is presented to our minds, we simultaneously apprehend the mental phenomenon itself. What is more, we apprehend it in accordance with its dual nature insofar as it has the sound within it, and insofar as it has itself as content at the same time."

Everything Brentano claimed here remains precisely valid if we understand the term "(re)presentations" as "neuronal patterns-qua-attractors". Namely, neuronal patterns "reflect each other" (heteroassociation) or "reflect themselves (into themselves)" (autoassociation) (Perus, 1995a). Neurons, and the patterns consisting of them, represent context and content for each other, and patterns represent context and content for themselves within themselves, because the neurons which constitute them are constantly interacting.

The recursive "self-intentionality" is the basis of the process of self-awareness. Memory is represented by the correlation-patterns in the system of synaptic connections. The object of consciousness is represented in the pattern of the system of neurons which is involved in a global associative connection or interplay with many other stored patterns. Awareness and self-awareness might correspond to Brentano's discussion of reflective re-representations (although Brentano did not explicitly mention self-awareness). Brentano (1973, pp. 128) continues:
"If an inner presentation were ever to become inner observation, this observation would be directed upon itself.
One observation is supposed to be capable of being directed upon another observation, but not upon itself. The truth is that something which is only the secondary object of an act can undoubtedly be an object of consciousness in this act, but cannot be an object of observation in it. Observation requires that one turns his attention to an object as a primary object. (...) Thus we see that no simultaneous observation of one's own act of observation or of any other of one's own mental acts is possible at all. We can observe the sounds we hear, but we cannot observe our hearing of the sounds. On the other hand, when we recall a previous act of hearing, we turn toward it as a primary object, and thus we sometimes turn toward it as observers. In this case, our act of remembering is the mental phenomenon which can be apprehended only secondarily."

Thus, Brentano is a representative of a "copy-theory" of self-awareness: a man can be aware of a copy or recalled image of a mental event which has just passed away, but not of this mental event directly.

We must note that there is one exception to the exclusion of simultaneous representations and re-representations: experiences of mystical unity - insofar as they are un-intentional (Perus, 1997d). They correspond to coherent symmetrical dynamics of the neural substrate on the biological level and to global-attractor-formation on higher virtual levels (pattern-superpositions or simultaneously coexisting representations merge into a uniform whole). Quantum correlates of such processes (e.g., Bose-Einstein condensates) are very probably also relevant.

Brentano admits the complementarity of the first-order consciousness and the accompanying second-order consciousness (1973, pp. 129):
"The consciousness of the presentation of the sound clearly occurs together with the consciousness of this consciousness, for the consciousness which accompanies the presentation of the sound is a consciousness not so much of this presentation as of the whole mental act in which the sound is presented, and in which the consciousness itself exists concomitantly. Apart from the fact that it presents the physical phenomenon of sound, the mental act of hearing becomes at the same time its own object and content, taken as a whole."

We shall conclude with Brentano (1973, pp. 134) that, if we see a color and have a representation of our act of seeing, the color which we see is also present in the representation of this act. This color is the content of the representation of the act of seeing, but it also belongs to the content of the seeing itself. It is well known that there are good self-interacting candidates for correlates of these self-reflective mental processes on the levels of neural, sub-cellular and/or quantum networks, arising from iterative fractal-like dynamics (Perus, 1997b).

The Rosenthal-Gennaro theory of consciousness entailing self-consciousness

The relation of consciousness and self-consciousness was also discussed recently by Rosenthal and Gennaro. Gennaro (1995), following the higher-order-thought theory of (self-)consciousness by Rosenthal (e.g., in Davies & Humphreys, 1993, pp. 197-224), shows that consciousness entails self-consciousness. (Note that the English notion "consciousness" cannot be well translated into many other languages, because it is used in a relatively broad sense. For example, the German "das Bewusstsein", or "zavest", "svijest", "svest" in South Slavic languages, which are usually translated as "consciousness", carry an implicit meaning of "self-consciousness" (much more than in English). This situation is in agreement with Gennaro's welcome, and in an English context non-trivial, idea that consciousness entails self-consciousness.)

Gennaro (1995) argues that a mental state S becomes conscious if it is accompanied by a meta-psychological thought M that one is in that mental state S. Examples of meta-psychological states are thoughts, beliefs, desires, wishes, hopes and fears. But not all such second-order states can render a first-order mental state conscious - it must be a meta-psychological thought directed at the first-order state, Gennaro says. "Self-consciousness does involve an explicit (albeit unconscious) thought and accompanies all conscious experience." (Gennaro, 1995, pp. 18)

Gennaro distinguishes several kinds of (self-)awareness. For example, he states that worms or flies cannot be consciously aware, but they can be behaviorally aware. A day-dreaming truck driver must have been behaviorally aware of the turn in the road in order to drive safely, but he lacked real, conscious awareness.

Introspection, he writes, is having a conscious thought about one's own mental state. Having a conscious thought does not entail introspective awareness: in conscious states we are often not consciously thinking about our own thoughts, but we are nevertheless thinking about, or having (unconscious) "thought awareness" of, them. So, introspection is a special form of self-consciousness which includes conscious, not ordinary, higher-order thoughts. A meta-psychological state M is conscious when a higher meta-psychological thought MM is directed at it. In introspection, a mental state S is accompanied by M, and, furthermore, there is an MM directed at M. Deliberate introspection is the most sophisticated kind of S-M-MM loop.

Gennaro does not define qualia as introspective capacities, but rather as externally experienced. Nevertheless, they must be understood as features or properties of neural processes when observed from the third-person perspective, he argues in a materialist-like way. In spite of following the token identity theory (every mental event is a physical event), he thinks that it is best to explain consciousness in mentalistic terms. He writes (p. 126, emphasis his):
"However, just having the appropriate neural event will not guarantee that the relevant quale is experienced qua quale. We should distinguish the existence of a quale qua neural property from its existence qua experienced quale. The former can exist without the latter and I suggest that the higher-order awareness of that neural property is what brings about its existence qua experienced quale." Is this an example of "promissory materialism", as Tart defined it?

One can have a particular phenomenal state without its typical quality, as in the following case: one can have muscle twinges while sleeping, and these cause one to change position (Gennaro, 1995, p. 8). Thus unconscious phenomenal states are possible, and they may, Gennaro presumes, share certain underlying neural processes with conscious phenomenal states.

These are kinds of Gennaro's conscious states:

  1. conscious phenomenal states
  2. conscious world-directed non-perceptual intentional states (e.g. desires and thoughts)
  3. self-consciousness

Beliefs can be understood dispositionally, and one can have many beliefs at the same time; thoughts, however, are momentary mental events. Therefore beliefs cannot be the meta-states needed for making first-order states conscious, but thoughts (which are conceptual and intentional) can be. The right meta-states must not arise in an inferential manner, and meta-awareness must be direct or immediate (Gennaro, 1995, p. 101).

In spite of, or even because of, his clear definitions of the notions used, I am in minor disagreement with his emphasis on the necessity of higher-order conceptual thoughts for self-consciousness. For example, in meditational states conceptual thought is weakened (and there are surely no linguistic or propositional concepts), but self-consciousness usually persists, although in an altered (non-focused) way. Moreover, higher-order-thought theory gives a description of the system-processual background of self-consciousness (i.e. "access self-consciousness"), but still no explanation of the qualitative, phenomenal characteristics of (self-)consciousness, which have some trans-conceptual features.

Then, Gennaro (1995) offers a systematic sequence of logical arguments for ideas of his own and of other philosophers of mind. A system cannot have thoughts, and cannot have phenomenal states, which are all unconscious. Because being a conscious system presupposes being able to modify one's own behavior on the basis of internal mental states, and insofar as behavioral modification entails self-consciousness, being conscious entails being self-conscious (the behavior argument).

Because being conscious entails having some intentional attitudes, which entail having de se attitudes (i.e., self-ascribed, self-attributed desires, thoughts, beliefs, etc.), which furthermore entail self-consciousness, being conscious entails being self-conscious (the de se argument). Later he replaces this version of the argument (after realizing that de se attitudes are not necessary for having intentional attitudes directed at other things, etc.) by the following final form: being conscious entails having conscious thought; this furthermore entails having de se thoughts (thoughts about one's own thought), and this entails self-consciousness; therefore consciousness implies self-consciousness.

The last main argument is the memory argument: because consciousness entails episodic memory, which entails self-consciousness, being conscious entails being self-conscious. Let us recall that episodic or "autobiographical" memory and semantic memory are two subsets of declarative memory (knowing that). These sorts of memory are propositional - in contrast to procedural memory (knowing how), which is not suitable for the author's memory argument, since it does not need (self-)consciousness. Episodic memory requires a temporal dimension and a concept of the past, and this is supposed to entail the self-concept. With the loss of a sense of the past comes the lack of a grasp of the future, and therefore the inability to compare and plan one's own actions, he says. A soloist does not merely remember some man being listened to by other people, but rather himself being listened to, and what it felt like at the time. Being conscious demands having a stream of conscious "presents" which continuously remain related to previous "presents" becoming "pasts". Again, the transcendental experience of pure consciousness (beyond space and time) provides a special counter-example to this argument outside the domain of ordinary states of consciousness.

Gennaro thus concludes that mental states (e.g. beliefs) per se do not require self-consciousness. However, episodic memories, self-directed (de se) thoughts and self-modification of behavior are needed for consciousness, and they entail some form of self-consciousness.

The acceptance of Gennaro's structurally sound theory depends on how one understands his notion of meta-psychological thought. The nature of meta-thought remains, it seems, a somewhat open question, at least partly because of the unsolved qualia problem.

Conclusions and Outlook on the qualia problem

In the first part, the system-processual backgrounds of consciousness in various biophysical "media" were presented. Several mathematical neuro-quantum analogies were listed. It was argued that they are the result of a similar collective dynamics in neural and quantum networks. The most important analogy was the following: the reconstruction of a neuronal pattern (the recall of a pattern from memory) is analogous to the so-called "wave-function collapse". In the neural case, from a "mixture" of neuronal patterns one pattern alone is made explicit in the system of neurons ("consciousness"); all the others "die out" there and remain stored only in the system of synaptic connections (in memory). In a quantum system, the wave-function "collapses" from a superposition of eigen-wave-functions to a state which can be described by a single eigen-wave-function; all the others are latent, enfolded in the implicate order.

These processes provide a processual background for consciousness, and for bi-directional consciousness--memory transitions as well as for unconsciousness--consciousness transitions. It was emphasized that some multi-level coherence of various fractal-like complex systems is necessary for consciousness. Such flexible and fuzzy multi-level processes constitute an alternative basis for aconceptual experience including consciousness.

In the second part, the theories of representations (following connectionist line), intentionality (following Brentano's tradition) and consciousness together with self-reflective consciousness, i.e. self-awareness (starting with Brentano, then following Rosenthal and Gennaro) were discussed.

The I (ego), as a proposition-like self-representation, can be treated as an attractor or gestalt of the highest order (insofar as its phenomenal character could be neglected - strictly, it cannot be). Deep meditators can transcend their I's (egos) as soon as the corresponding global attractor is erased (see also Deikman, 1996).

The hard problem of qualia remains unsolved. Some physicists argue that "qualia are embedded in space-time"; some cognitive neuroscientists say that this is a quasi-problem; some spiritually-oriented scientists argue that qualia are no problem at all, because they are the basic thing, and all other things must be explained starting from this experience. It is obvious that humans experience qualia, but for now we do not understand their nature at all. On the other hand, the system-processual background of consciousness is known increasingly well, using the theory of neural networks with their hierarchies of attractors, and roughly similar processes in quantum networks.

At the end, we will consider a characteristic attempt of a non-reductionist philosopher (N. Nelkin) to discuss (self-)consciousness with phenomenal qualia. Phenomenal consciousness is a mental state with subjective feelings, i.e. there is something it is like to be in that state. It is essential for sensations such as colourful visual and auditory experiences, kinaesthetic feelings, and pains. On the other hand, there is nothing it is like to have a conscious thought or belief (Nelkin, 1996). So, if one is aware of one's phenomenal quale, this is because one has a second-order, noninferential, proposition-like awareness that one is in that first-order or phenomenal mental state. The awareness of qualitative states (which makes these qualia conscious) is an apperceptive second-order thought or even judgement, and as such this second-order state is more important for personality and self-identity than the qualia (first-order phenomenal states) themselves. Nelkin (1996) distinguishes three types of conscious states:

1. first-order proposition-like representational state (C1), 
2. high-level, neurally-based, image-like representational state with phenomenality (CS),
3. second-order, direct, noninferential accessing and proposition-like representation of some C1 and of some CS (C2).
CS is already a kind of phenomenal awareness, but C2 is a distinct apperceptive awareness (a real self-awareness).
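Nelkin's three-way classification can be sketched as a toy data structure (an illustrative Python sketch; the class and field names are our assumptions, not Nelkin's terminology):

```python
from dataclasses import dataclass
from typing import Optional

# Toy rendering of Nelkin's three state types:
#   C1 - first-order, proposition-like representational state
#   CS - image-like representational state with phenomenality
#   C2 - second-order, noninferential, proposition-like state
#        directed at some C1 and/or some CS

@dataclass
class C1:
    proposition: str            # e.g. "there is a red apple"

@dataclass
class CS:
    phenomenal_content: str     # e.g. "redness of the apple"

@dataclass
class C2:
    about_c1: Optional[C1]      # direct access to a first-order state
    about_cs: Optional[CS]      # direct access to a phenomenal state

    def is_apperceptive(self) -> bool:
        # C2 counts as apperceptive (self-)awareness only when it is
        # actually directed at some lower-order state
        return self.about_c1 is not None or self.about_cs is not None

seeing = CS("redness of the apple")
belief = C1("there is a red apple")
awareness = C2(about_c1=belief, about_cs=seeing)
print(awareness.is_apperceptive())
```

The sketch only encodes the classification; on Nelkin's view it is the C2 state, not the CS it points at, that constitutes real self-awareness.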

Consciousness, in spite of its relative primitiveness, unanalysability, ineffability and phenomenal unity (Kihlstrom, 1993), has several aspects, even unconscious implicit ingredients. Many beliefs we are apperceptively conscious of do not seem tied to phenomena. For example, one can be conscious that one believes tomorrow is Friday, but no set of phenomena is required for that consciousness (Nelkin, 1996). Secondly, phenomenal experience may alter because of physiological change: patients with newly implanted lenses complain that their colour phenomena are different.

It seems that non-phenomenal (i.e. "access-conscious", "purely" informational) states are those which, when directed toward C1 or CS, make the subject aware of them (of C1 or CS). Nevertheless, in spite of not being universal and omnipresent, phenomenal qualitative states remain the central mystery (Hubbard, 1996). The hardest problem is not awareness or self-directed awareness itself, but the qualitative nature of (self-)awareness. That is to say, with a theory of higher-order proposition-like thoughts we can somewhat trace the (cybernetic, recursive, iterative) essence of self-awareness, but we have no way to explain its phenomenal character. Although Nelkin tried to take qualia a little more seriously than Gennaro and Rosenthal, he did not succeed - nor has any other consciousness researcher (Flanagan, 1992; Hendriks-Jansen, 1997).


-Amit, D. (1989): Modeling Brain Functions (The world of attractor neural nets). Cambridge Univ. Press, Cambridge.
-Baars, B.J. (1997): In the Theater of Consciousness. Oxford Univ. Press.
-Banks, W.P. (1996): How much work can a quale do? Consciousness & Cognition 5, 368-380.
-Bohm, D. (1980): Wholeness and Implicate Order. Routledge and Paul Kegan, London.
-Bohm, D. & B. Hiley (1993): The Undivided Universe (An ontological interpretation of quantum theory). Routledge, London.
-Brentano, F. (1973): Psychology from an Empirical Standpoint. Routledge & Kegan Paul, London. (German original: Psychologie vom empirischen Standpunkt, 1874.)
-Burnod, Y. (1990): An Adaptive Neural Network: the Cerebral Cortex. Prentice Hall, London.
-Chalmers, D.J. (1995): The puzzle of conscious experience. Scientific American (December), 62-68.
-Davies, M. & G.W. Humphreys (Eds.) (1993): Consciousness. Blackwell, Oxford.
-Deikman, A.J. (1996): "I" = awareness. J. Consciousness Studies 3, 350-356.
-Dennett, D.C. (1969): Content and Consciousness. Routledge & Kegan Paul, London.
-Flanagan, O. (1992): Consciousness Reconsidered. MIT Press, Cambridge (MA).
-Gennaro, R.J. (1995): Consciousness and Self-consciousness (A Defence of the Higher-Order Thought Theory of Consciousness). John Benjamins, Amsterdam / Philadelphia.
-Goswami, A. (1990): Consciousness in Quantum Physics and the Mind-Body Problem. Journal of Mind and Behavior 11, 75-.
-Haken, H. (1991): Synergetic Computers and Cognition. Springer, Berlin etc.
-Hameroff, S.R. (1994): Quantum coherence in microtubules: a neural basis for emergent consciousness? J. Consciousness Studies 1, 91-118.
-Hameroff, S.R.; A.W. Kaszniak & A.C. Scott (Eds.) (1996): Toward a Science of Consciousness - Tucson I. MIT Press, Cambridge (MA).
-Hendriks-Jansen, H. (1997): Information and the dynamics of phenomenal consciousness. Informatica 21, in press.
-Hubbard, T.L. (1996): The importance of a consideration of qualia to imagery and cognition. Consciousness & Cognition 5, 327-358.
-Kafatos, M. & R. Nadeau (1990): The Conscious Universe. Springer, New York.
-Kihlstrom, J.F. (1993): The continuum of consciousness. Cognition & Consciousness 2, 334-..
-Lockwood, M. (1989): Mind, Brain and the Quantum. Blackwell, Oxford.
-Marcel, A.J. & E. Bisiach (Eds.) (1988): Consciousness in Contemporary Science. Clarendon Press, Oxford.
-McClelland, J.L.; D.E. Rumelhart & PDP research group (1986): Parallel distributed processing (Explorations in the Microstructure of Cognition) - vol. 1: Foundations / vol. 2: Psychological and Biological Models. MIT Press, Cambridge (MA).
-Nagel, T. (Ed.) (1993): Experimental and Theoretical Studies of Consciousness. John Wiley & Sons, Chichester etc., 1993 (in particular: M. Kinsbourne: Integrated cortical field model of consciousness).
-Nelkin, N. (1996): Consciousness and the Origins of Thought. Cambridge Univ. Press, Cambridge.
-Newman, J. (1997): Toward a general theory of the neural correlates of consciousness. J. Consciousness Studies 4, 47-66 (part I) and 100-121 (II).
-Oakley, D.A. (Ed.) (1990): Brain and Mind. Methuen, London.
-Penrose, R. (1989): The Emperor's New Mind (concerning Computers, Minds, and Laws of Physics). Oxford Univ. Press, London.
-Penrose, R. (1994): Shadows of the Mind (A Search for the Missing Science of Consciousness). Oxford Univ. Press, Oxford.
-Peretto, P. (1992): An Introduction to the Modeling of Neural Networks. Cambridge Univ. Press, Cambridge.
-Perus, M. (1995a): All in One, One in All (Brain and Mind in Analysis and Synthesis). Ljubljana, DZS (in Slovene).
-Perus, M. (1995b): Synergetic Approach to Cognition-Modeling with Neural Networks. In: K. Sachs-Hombach (Ed.): Bilder im Geiste, 183-194. Rodopi: Amsterdam, Atlanta.
-Perus, M. (1995c): Analogies between quantum and neural processing - consequences for cognitive science / In: P. Pylkkanen, P. Pylkko (Eds.): New Directions in Cognitive Science. Finnish AI Soc., Helsinki (115-123).
-Perus, M. (1996): Neuro-quantum parallelism in mind-brain and computers. Informatica 20, 173-183.
-Perus, M. (1997a): Multi-level synergetic computation in brain. Advances in Synergetics 9, in press.
-Perus, M. (1997b): Neuro-quantum coherence and consciousness. Noetic J. 1, in press.
-Perus, M. (1997c): Common mathematical foundations of neural and quantum informatics. Z. Angewandte Mathematik und Mech., in press.
-Perus, M. (1997d): System-Theoretical Backgrounds of Mystical and Meditational Experiences. World Futures: J. General Evolution, in press.
-Perus, M. (1997e): Neural networks, quantum systems and consciousness. Science Tribune, http://www.iway.fr/sc/tribune/articles/peru1.htm
-Rakovic, D. & Dj. Koruga (Eds.) (1996): Consciousness. ECPD, Beograd.
-Sajama, S.; M. Kamppinen & S. Vihjanen (1987): A Historical Introduction to Phenomenology. Croom Helm, London etc.
-Searle, J.R. (1993): The problem of consciousness. Cognition & Consciousness 2, 310-.
-TucsonII (1996): Toward a Science of Consciousness. Consciousness Research Abstracts (JCS).
-Ule, A. (1997): Consciousness and process. Informatica 21, in press.