
Machines as part of human consciousness and culture

 

Paper presented at the International Symposium “Machine Consciousness”, 13.7.2001, Jyvaskyla, Finland

 

Timo Jarvilehto

Department of Behavioral Sciences, University of Oulu, Finland

E-mail: tjarvile@ktk.oulu.fi

 

 

 

Introduction

 

It seems that the unprecedented progress in the study of the brain and consciousness, combined with the development of behavioral robotics and artificial intelligence, has in recent years produced many press releases informing the public that scientists are close to creating truly intelligent and conscious machines that can talk with their users and even grasp their feelings. Several researchers maintain that they are close to building an artificial mind, and that it will take perhaps only a few decades until we witness the advent of the first autonomous humanoid robots.

 

At present there are, in fact, three fields of research which seem to have advanced greatly in recent years, and which share this optimism about quick solutions to many ancient practical and theoretical problems: genetics, neuroscience, and the new kind of robotics or artificial intelligence. Almost daily, striking new findings are reported in all these fields. The sequencing of the human genome, the mapping of psychological functions in the brain, and the development of intelligent machines are all claimed to have a profound impact on the future everyday life of human beings.

 

It is, however, questionable to what extent such enthusiastic reports and press releases are merely expressions of the researchers’ fantasy, or how far they are simply motivated by the need to satisfy the demands of sponsors and to secure new funding.

 

In this paper I would like to express a word of caution: the basis of such press releases is not nearly as solid as it sounds. In fact, all of these fields share some questionable basic assumptions. In particular, the present interpretations of the human-like abilities of machines rest on conceptions of the origin and development of consciousness, and of the basic characteristics of robots, which are untenable and grounded in questionable philosophical ideas.

 

Now, the problems in the theoretical account of consciousness and of the human-machine relation are closely related to the basic conceptual problems of psychology as a science. In fact, one of the greatest mistakes of psychology, since the beginning of experimental psychology in the 19th century, has been the conviction that philosophical problems are no longer relevant in psychology, or that they have been adequately solved. This led to experimental work that ignores its own basic philosophical assumptions, and that is based on the idea that the collection of experimental results alone will somehow provide an adequate theoretical understanding of the subject matter.

 

This situation is at present reflected in research on robotics, neuroscience, and consciousness. For example, the great majority of modern neuroscience still follows the old phrenological model of the 19th century in trying to locate mental functions in different parts of the brain on the basis of electrical recordings or changes in blood flow. Almost two hundred years ago, Gall and Spurzheim proposed that the cortex of the brain contains “mental organs” which grow when the corresponding mental faculties are exercised, thus causing local bumps on the skull. According to them, mental abilities could then easily be determined by measuring these bumps. In modern neuroscience the measurement of bumps has been replaced by more refined recordings, but the basic idea is the same: mental activity can be located in parts of the brain.

 

This is also the basis for the claim that it is possible to construct a conscious machine or robot. If mental activity and consciousness are produced by the brain, then it should be possible to build an artifact which simulates the complexity of the brain to such an extent that it, too, will have consciousness.

 

In the present paper I claim that mainstream neuroscience is on the wrong track when it tries to locate mental functions in different parts of the brain, because mental functions and consciousness are not located in the brain. With respect to robotics, it follows from this point of view that it is not possible to build a conscious robot.

 

 

The theory of the organism-environment system

 

In the following I will deal with these problems on the basis of a theoretical approach which I have developed during the last ten years, and which I have recently described in a few Finnish books and a series of English articles (Jarvilehto, 1998a,b,c; 1999; 2000). The approach starts with the claim that the conceptual confusions in psychology, as well as in neuroscience and robotics, are due to the postulate that the organism and the environment are two separate systems. This is our conventional common-sense point of view: here am I, and there is the environment. Inside me there is something, such as mental activity and feelings, and outside is the environment, in which the factors that influence my behavior are located.

 

However, it is questionable whether this common-sense point of view can serve as a basis for the scientific analysis of human behavior. The reason is that this assumption leads to the ascription of physical, biological, mental, and social concepts to the organism, or to its brain, whereas the environment is conceived of simply as a physical system consisting of stimuli. How could such diverse systems then interact? If mental activity is in the brain, is it only neural activity? How is social action possible if we are from the beginning captured within our skull?

 

The conceptual obscurity of the common-sense view of two interacting systems, organism and environment, leads to the question whether the basic assumption, the separation of organism and environment, is useful at all. In fact, we all know that no organism can survive without an environment, and if we look somewhat closer, we can easily see that no exact border can be drawn between the organism and the environment. Take breathing, for example: what is a respiratory system without air, and where is the border between “inner” and “outer” air? The same is true of metabolism. In fact, in any living system the organism and the environment are inseparable.

 

In my approach, which I call the “theory of the organism-environment system”, I take this fact seriously and start with the postulate that the organism and the environment belong together: they form a unity and cannot be studied separately with respect to psychological processes or mental functions. Hence, a psychological process always involves the whole organism-environment system. The basic unit of psychological investigation is not a psychological process within the organism, but a process in the whole organism-environment system. This is the system in which “psyche” is realized, and if this system is divided into smaller parts, we lose the psychological object of study.

 

Therefore, according to this approach -- and in contrast to mainstream neuroscience -- one does not find mental activity, psyche, or consciousness within the organism (in its brain or stomach), any more than they can be found in external stimulation. Mental activity is not activity of the brain, although the brain is certainly an important part of the organism-environment system.

 

This approach offers the possibility of defining psychological concepts without giving them an independent existence in the form of mystical mental entities, as exemplified by psychoanalysis, and without reducing them to neurophysiological concepts or computation in the nervous system, as is typical of mainstream cognitive science. For example, perception is a concept which characterizes the result of joining certain parts of the environment to the system; memory refers to the structure of the system; and in learning we are dealing with the widening and differentiation of the organism-environment system. The brain and body play an important role in such processes, but only insofar as they contribute to the organization of the whole organism-environment system.

 

 

Consciousness as co-operation

 

Now we may look at how the concept of consciousness can be understood on this basis, and at what then follows with respect to robotics.

 

In traditional discussions of consciousness, there are usually two features which seem to be in contradiction:

 

The first is the apparent individuality of consciousness: when I am conscious, I know my own feelings, and I may always doubt whether other human beings have such feelings. Thus, consciousness is conceived of as a private and personal faculty hiding somewhere in the abyss of the soul or the brain.

 

The second feature seems to be the opposite of the first, namely the commonality of conscious experience: consciousness means the possibility of reporting one’s experience and of co-operating with other people. Consciousness is common knowledge; when I am conscious, I can share my experiences with other people and be convinced that the others understand what I say. In fact, even the word “consciousness” has its origin in the Latin words “con” (“together”) and “scire” (“to know”): to know together. However, if consciousness were only individually present, a private faculty, how would the sharing of experience and common knowledge be possible?

 

In traditional discussions of consciousness, this contradiction is difficult to resolve, because the methodology of thinking is ahistorical and absolutizes the final product of evolution: the conscious subjective experience of the human being. Already in the 17th century, Spinoza criticized his contemporary philosophers for philosophizing in the wrong order: one should not begin contemplation with the final product, with one’s own limited ideas, and generalize it to the whole of nature; rather, one should first reflect on the properties of nature and contemplate the possibilities nature has of producing this final product. Such a methodology entails that no phenomenon can be understood without taking into account its evolution and history.

 

The organism-environment theory follows this principle when examining the development of consciousness. According to this approach, consciousness did not appear all at once in the form of present human consciousness or subjective experience; it has a long evolutionary history.

 

I have proposed that the critical feature in the advent of human consciousness was the development of a co-operative organization, i.e. a joining together of single organism-environment systems which produced something genuinely new: a common result. The specific feature of this result was that no individual alone could achieve it, and that it could be varied under different life conditions through the reorganization of the whole community.

 

Furthermore, from the point of view of the life process, the common result was useful for the participants and for the development of the community as a whole. Consequently, consciousness is not primarily regarded as a characteristic of an individual, but its origin is seen in an organization of several organism-environment systems acting together for a common result. Consciousness is thus – in its most general form – a characteristic of a social organization.

 

However, the advent of consciousness was also the advent of the individual and the “self”, or personal consciousness, which means for the individual the possibility of reflecting on the results of his own action and experience. Hence, consciousness is not only shared and common, but also personal.

 

Yet this personal consciousness is not something residing “inside” the individual; it consists of the personal participation of the individual in the results of the co-operation. The self does not lurk hidden somewhere between perceptual input and motor output. Nor is reflection upon one’s own action “self-referential” activity, in the sense that an “I” as subject would look at an “I” as object; it happens as a relational process in the co-operative community.

 

Thus, consciousness cannot be located in the brain, or in any part of the individual. A brain -- whether it consists of protoplasm or is an artificial neurocomputing machine -- cannot be conscious as such, any more than a robot can be. The brain is only one organ of the body (and even anatomically difficult to delimit exactly; in fact, there is no way to separate the nervous system from the rest of the body). Locating consciousness in the brain or in the machine leads to questions which cannot be answered, because for consciousness to exist we need much more than the brain or the machine alone. Furthermore, the localization of conscious experience in the head is based on a mistaken conception of the subject of conscious action.

 

The subject of consciousness is not the body, the brain, or a neuron, but an “I”, a person who cannot be defined on the basis of the structure of his brain, but rather as a point of intersection in a net of social relations. The “I” is not an entity in the same sense as a body, but a systemic relation. The thinking and conscious subject is not a piece of flesh, but a set of relations and processes in the social system. Such relations create a person who is distinct from all other personalities precisely through those specific relations.

 

Thus, a person may be defined as a point of intersection of all social relations, the body being the spatial location of that point of intersection; the concept of a person contains all those parts of the world and those relations which are important for the life process of the individual. These parts are the basis of the identity of the individual, his self. Nobody can have a personality or self identical to somebody else’s, because it is not possible to have the same social relations as somebody else. It is this fact that gives every individual his uniqueness. It is also the reason why a robot may never become a person.

 

In the social system, the contributions of the different individuals culminate in the common result, and participation in the common results widens the action possibilities and the personal consciousness of the individual. The development of personal consciousness is therefore directly related to the possibility of using the common results in one’s own action.

 

There is no individual or conscious experience without the common co-operative organization. An individual gets his characteristics, and his conscious experience gets its contents, through the relations in the community. Neither can exist outside these relations, even when the individual acts seemingly “alone”, physically separated from other people, as a hermit, for example. Once a human being has developed his social relations, he is no longer alone in his life, even if he is located on a desert island. Sociality does not depend on physical distance or time as such; we cannot measure loneliness in centimeters or minutes!

 

The development of individuality does not abolish the social character of consciousness. On the contrary, the content of consciousness must be learned in relation to the given culture and its norms. This means that a human being can have only a consciousness typical of the human species. The content of any conscious experience is tightly bound to the social situation in which its adequacy may be estimated and evaluated. The content of a conscious experience must comply with the norm; otherwise it is called a disturbance or a hallucination. The latter is typical of many forms of phobia, for example, in which a certain emotion is present in a situation that is normally not associated with that kind of feeling. Thus, the connotation of conscious experience is primarily ethical: it always stands in relation to norms, and therefore it can exist only within the organization which has created those norms.

 

The social character of consciousness may also be seen in the fact that the deepest (and traditionally the most “private”) conscious feelings and thoughts are those which are most specific to the whole human species. As Carl Rogers succinctly stated: “The most personal is the most general”. Thus, there is not much evidence to support the conception of consciousness as a fundamentally private experience, confined to a single brain or even a part of the brain. As a matter of fact, the traditional approach to consciousness cannot detect its truly social nature, because the experiencing subject is detached from the social situation before the consideration even starts.

 

 

Consciousness in the machine?

 

If consciousness and conscious experience are so tightly bound up with human co-operation and culture, what about consciousness in machines? Can machines develop a human kind of consciousness?

 

First of all, a machine is built by human beings as an extension of their own actions. Machines are constructed for a certain purpose or task; they incorporate some abilities of human beings, but in an amplified form. Thus, digging a hole is much more effective with a spade than with bare hands, and many calculations are carried out much faster by a computer than by a human calculator. We have constructed these machines as parts of human culture in order to achieve the results we need much more effectively than without the machine.

 

Thus, in this respect a machine does not have any independent existence; it exists as a machine only as a part of human action and culture. If there were no human beings, there would be no machines either, even if machines somehow outlasted human existence. A computer without somebody to interpret the printed page or the result of a calculation would be no computer, just a piece of trash without any meaning.

 

But perhaps it would be possible to construct a machine which is not dedicated to a particular task, but which could develop its own tasks, and which could then be taught to be like a human being. In research on so-called autonomous agents there are strong efforts to develop this kind of robot, and some simple prototypes already exist. They are usually constructed of modules which are not strictly preprogrammed and which can create their own contacts with the environment on the basis of their own actions. Could some more developed form of such robots create consciousness?

 

This is, of course, a fascinating idea. The creators of the robot Kismet, constructed at MIT, for example, believe that their robot has rudimentary experiences, sensations, and social skills. The lab has even hired a Lutheran minister as a theological adviser, in case difficult religious or ethical problems are encountered in the development of Kismet or its brother Cog.

 

However, there are good reasons to believe that even such robots would never become conscious in the human sense. The main reasons are the following:

 

First, the development of consciousness presupposes co-operation, which can happen only between structurally similar beings; and second, the development of human consciousness presupposes participation in human developmental history and culture.

 

It is essential for the development of consciousness that common results can be achieved which are useful for all participants in the co-operation. This means that the results of common action must, at least in some respects, fit the organization and developmental history of the participants. In other words, they must support the development of the participants and satisfy some of their needs. Can machines come to terms with such co-operation?

 

Here it may be useful to have a closer look at what it means to construct a machine.

 

When we build a robot, it is not enough to put its parts together; we must also have some kind of conception of its environment -- not necessarily an exact one, as in the development of “autonomous agents”. Even in the latter case, however, the possible environmental features and parts are roughly defined in the construction of the robot’s parts. We must construct its connections to the world, which means that we should know what kinds of parts will be important for its functioning. Therefore, here too the environment of the robot is predetermined, though it may vary more than with older robots.

 

Hence, if we wanted to construct a truly “autonomous” robot, how should we shape its structure and environment?

 

The organism-environment theory states that the parts of the environment belonging to the organism-environment system are defined by the structure of the system. A physical description of a living system can never be a complete description, not only because physics has nothing to say about life as such, but also because the parts of the system are not selected according to physical laws, but on the basis of the living structure.

 

Thus, when we describe the environment of an organism, we do not really describe the parts belonging to the living system; we describe these parts as separated from that system and joined to the system of the observer. Therefore, we cannot directly observe the subjective environment of the organism; we can describe only our own environment and relate it to the observed behavior of the organism. When studying the behavior of the organism, we may then see how it relates to parts of the environment which, in fact, belong to our own system.

 

This consideration may be applied to robots. When we describe the environment of a robot, we do not really describe the environmental parts belonging to the robot-environment system in the strict sense; we describe certain parts of the world from the human point of view.

 

From this it follows that the constructed environment of the machine is a human environment as presented in language (in diagrams, models, pictures, etc.). In constructing machines, we explicitly separate machine and environment, and we use language to describe the parts of the machine and the parts of the environment related to it. As designers and constructors, we anticipate what will happen to a machine with a certain set of elements in the environment we know.

 

However, it is quite possible that, from the point of view of the machine, there exist features of the world which are not (and cannot even be) parts of the environment of its designer. In fact, this could be the reason why all machines eventually break down: since it is impossible to take the whole universe into account when building a machine, there are always unknown factors which do not fit the constructed structure of the machine.

 

Technology is something that we construct on the basis of words and diagrams. Therefore, there is a crucial difference between living systems and technology that models living systems. The brain, for example, is not a technological device -- even though it is often modeled as one -- because its functioning does not follow any human-made rules, but its own intrinsic living structure.

 

As a matter of fact, since words and diagrams are not the same as what they symbolize, we can never build life from verbal descriptions. If we follow a verbal description when making an artifact, we do not create a living thing, but a model of a living thing, something that superficially imitates it. We can construct an artificial cell which looks and acts very much like a cell, but which obeys human rules rather than the rules of the living structure. Therefore, such a cell is and remains an automaton, a machine built for human purposes.

 

In fact, every description of life is a metaphor created by humans, and it touches only some aspect of life. In constructing artificial life, the problem is that we try to build this metaphor, which results in something that is precisely “artificial”. Life cannot be exhaustively described, and even if it could be, the description would not be identical with life. We cannot create life by following linguistic descriptions. Life can be created only by living, not by imitating life.

 

The problem with imitation is that it does not reach the content, only the outer shell. When you imitate eating, you will not be satisfied; when you imitate a genius, you remain an average man. Constructing functioning machines according to instructions is possible, because machines are from the beginning constructed by humans. Man himself, however, is a “construction” of life and nature.

 

Those who think they can build a living cell on the basis of its description are in the same position as those who think that a poem is the same thing as the experience its reading creates. A poem as a text does not contain any experience, any more than the description of a cell contains life. Writing a poem is a process in which the world is changed so that somebody, by joining with this changed part of the world (the text), can have an experience and become reorganized. For the poet, the reader’s use of the poem is always a mystery: the poet does not know exactly what will happen, any more than the reader knows what happens when he understands the poem.

 

This said, I must add that I do not mean it is impossible to imitate some features of life, or to build robots which act in a very human-like fashion; here, it seems, we are getting more and more skillful. However, no matter how far we develop in this respect, the machine is an imitator, and the imitation is not life itself, but precisely imitation. A human-like computer may imitate conscious human behavior, but it can never share our developmental history; it is just a part of the human being and his culture.

 

Co-operation among human beings, based on communication, is possible because we share structurally, in addition to language, aspects of the environment which we cannot describe in language. These aspects we cannot share with robots, and therefore it is impossible to develop communicative co-operation with them. Robots cannot authentically participate in human co-operation, and therefore they always remain “slaves” or tools for human purposes.

 

Thus, we never really know, in the form of conscious knowledge, all that we know in this basic structural sense. We always “know” more in immediate action than we can describe in words. In language we can express only the results of our action, not the action itself and its structure. The description of our action is itself action; we cannot jump outside our life process and turn it as a whole into an object of description.

 

As I pointed out, consciousness is regarded in the organism-environment theory as an aspect of a system consisting of several organism-environment systems, a system directed towards common results that are useful for the whole co-operative system. The machine is something that we build to serve this process, and therefore it is just a part of the system.

 

But might robots communicate with humans, and even with each other? Communication is usually held to be an important criterion for the existence of consciousness. Perhaps a robot could be developed which could speak and understand human language, and which had a conception of its own capabilities, preferences, and taboos. Could such a machine be a truly “autonomous agent”, using language for its own purposes?

 

My answer is no, because even in this case “communication” is an illusion. We may, of course, build robots which use a human voice or send messages to each other and behave appropriately in relation to these messages, but this has very little to do with the human use of language.

 

Communicating" robots are designed for human purposes, and they will "communicate" only as far as this kind of action fulfills some human plans. If they start to do things of their own, we say that they have a malfunction. A creative machine is a broken machine. The use of “language” by the robots (even by complicated "learning" robots) may be compared to how a typewriter uses the language. Every key on the keyboard is able to produce a letter, but it depends on the human user whether such a communication makes any sense.

 

The conception of communicating robots is based on the idea that the use of language means only the ability to form syntactically correct sentences in response to presented questions. Language is thus seen only as a set of symbols which can be used without understanding, provided one knows the syntactic rules. Along these lines, writing or speaking is conceived only as the production of words in the correct order. This is the gist of the so-called Turing test, according to which a computer has a mind if you can no longer distinguish it from a real person in conversation.

 

However, from my point of view such a conception of language is deeply mistaken. The purpose of language is not to deliver syntactically correct sets of symbols, but to create co-operation. Words as such do not carry any meaning; they are only suggestions for co-operation. It is the production of co-operation, the joining of organism-environment structures (a process for which no words exist), that is critical in the development and use of language.

 

Communicative co-operation means that the participants in communication are able to change their structure so as to join their activities in achieving a common result which is in some sense new for all participants. A robot can certainly be constructed which simulates such co-operation, but even in the best case its activity resembles that of a slave who must act according to the rules of his master.

 

Communicative co-operation presupposes a common structure and a common history of development. Only then can a participant be sure that the others understand the world in a similar way, and only then can he anticipate their actions well enough. A human-made machine can never share the three billion years of the development of life on earth, nor the cultural heritage of humanity. It is and remains an artifact with no history.

 

Thus, every machine is an extension of human abilities. We could, of course, say that a robot is “conscious”, but only in connection with a human being, as a part of his consciousness. The question of consciousness in machines is like wondering whether my legs are conscious, since they carry me so well to the place I want to go.

 

 

Conclusion

 

To sum up, research on robotics and artificial intelligence is on the wrong path in trying to develop conscious machines, just as modern brain research faces an impossible task in trying to find special areas for consciousness in the brain. This does not mean denying the importance of the brain or the nervous system in the study of consciousness, or denying that machines could simulate conscious acts. However, locating consciousness in the brain or in the machine leads to questions which cannot be answered, because for consciousness to exist we need much more than the brain or the machine alone.

 

Thus, a machine or a brain as such can never have a consciousness of its own. Robots cannot exist as autonomous beings, because their existence as robots is bound to human culture. They are no more “interested” in co-operation or communication than spades are interested in digging holes, or computers in the content of their calculations; they do these things only when they are programmed and used by human beings.

 

Furthermore, there are deep ethical issues related to such endeavors. If we start to humanize machines, it easily follows that we start to mechanize human beings. Here, genetic engineering and the development of robotics seem to be two sides of the same coin. Already there are strong efforts towards the genetic manipulation of babies, reflecting the attitude that a baby is just a doll which serves the needs and satisfaction of its parents, a robot that is no longer born, but produced. From the other side comes robotics, which develops ever more human-like dolls as playthings for couples who cannot have babies, as in Spielberg’s recent movie. Shall we then gradually come to be treated only as some kind of robots, meant to serve their manufacturers?

 

However, I think we should not be too afraid of such a scenario (although one should certainly be worried about the attitudes behind such ideas), because in my opinion conscious machines will never be created. The idea that conscious robots will some day run about our streets is simply based on a faulty philosophy and on wrong conceptions of the human being and his consciousness. Human consciousness is based on a long developmental history and on co-operation with other human beings. Therefore, it is impossible to create consciousness artificially. Machines, on the other hand, are human constructions which may achieve any degree of complexity, but they will always remain parts of human consciousness and culture.

 

References:

 

Jarvilehto, T. (1998a). The theory of the organism-environment system: I. Description of the theory. Integrative Physiological and Behavioral Science, 33, 321-334. URL: http://wwwedu.oulu.fi/homepage/tjarvile/orgenv1.pdf

Jarvilehto, T. (1998b). The theory of the organism-environment system: II. Significance of nervous activity in the organism-environment system. Integrative Physiological and Behavioral Science, 33, 335-343. URL: http://wwwedu.oulu.fi/homepage/tjarvile/orgenv2.pdf

Jarvilehto, T. (1998c). Role of efferent influences on receptors in knowledge formation. Psycoloquy, 9(41). URL: http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?9.41

Jarvilehto, T. (1999). The theory of the organism-environment system: III. Role of efferent influences on receptors in the formation of knowledge. Integrative Physiological and Behavioral Science, 34, 90-100. URL: http://wwwedu.oulu.fi/homepage/tjarvile/orgenv3.pdf

Jarvilehto, T. (2000). The theory of the organism-environment system: IV. The problem of mental activity and consciousness. Integrative Physiological and Behavioral Science, 35, 35-57. URL: http://wwwedu.oulu.fi/homepage/tjarvile/orgenv4.pdf