

 

 

Commentary on

“Machines as part of human consciousness and culture”

by Timo Jarvilehto

 

Ben Goertzel

ben@goertzel.org

 

 

In his essay on machines as part of human consciousness and culture, Timo Jarvilehto presents an interesting point of view, and he presents it clearly and systematically.  However, it is my belief that his point of view is wrong.

 

I find his arguments quite convincing as a critique of standard, mainstream approaches to AI and robotics.  But I don’t find them convincing as a proof that no approach to AI – machine intelligence – can possibly succeed.

 

I agree with him that mind is largely about experiential interactive learning, about learning to perceive and act cooperatively with other similar minds perceiving and acting in the same reality.  A mind must have autonomy and spontaneity, so that it can freely direct its own interactions with other minds, using a combination of rules and randomness, and thus build itself up.  “Mind” is not just a collection of patterns in a brain; it is a collection of patterns emergent between a brain and a world, including other minds.  This point of view is developed in one respect in my essay on Artificial Selfhood; the Webmind AI Engine is one (not yet complete) attempt to embody this lesson in a real AI system.

 

Current computer programs are very far from satisfying these criteria, from being social, experientially interactive beings.  But this does not disprove the possibility of creating social, experiencing digital computer programs, any more than the existence of trees and cockroaches (which are made of basically the same molecules as people) disproves the possibility of creating social, experiencing beings out of organic molecules.

 

Machines and computer programs as they exist now are made to serve man.  As such, they lack the autonomy to develop real minds.  But we may make computer programs which are very different from the ones we have today – more mindlike, less predictable, more self-organizing and spontaneous.

 

This leads into some philosophical issues.  Is a computer program truly “deterministic” if, in practice, no human being can predict its outcome because its dynamics are so complex?  What if its complex dynamics interact with the dynamics of the physical world?  Is it still “deterministic”?  Not in practice, no more so than a human is.
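 

To make this concrete, here is a minimal sketch – my own illustration, not anything drawn from Jarvilehto’s essay or from the Webmind system – of a program that is deterministic on paper yet unpredictable in practice: a chaotic update rule whose state is nudged at each step by input from outside the program.  The logistic map, and the use of operating-system entropy as a stand-in for physical-world input, are arbitrary choices made purely for illustration.

import os

def run(steps=60):
    """Iterate the logistic map, nudging the state at each step with a bit of
    physical-world input (operating-system entropy stands in for sensor data)."""
    x = 0.123456789                        # precisely specified initial condition
    for _ in range(steps):
        x = 3.99 * x * (1.0 - x)           # deterministic, chaotic update rule
        nudge = os.urandom(1)[0] / 1e9     # tiny perturbation from "outside" the program
        x = min(max(x + nudge, 0.0), 1.0)  # keep the state in [0, 1]
    return x

# Two runs of the same fully specified program diverge, because the environment
# it interacts with is not part of its specification.
print(run(), run())

Tiny perturbations from the environment are amplified by the chaotic dynamics, so even a complete listing of the program does not allow one to predict its state after a few dozen steps.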

 

It also leads into an ethical gray area that may become important in the future.  Microsoft Word is clearly a tool, and a superintelligent digital mind is clearly an autonomous being with rights; but there will be intermediate cases, in which it will be difficult to judge whether one is dealing with an autonomous mind deserving of freedom, or with something more like a digital animal (and animals, in our society, are granted far fewer rights than humans).

 

When I read sentences like Jarvilehto’s conclusion:

 

Human consciousness is based on his long developmental history and co-operation with the other human beings. Therefore, it is impossible to create consciousness artificially. Machines, on the other hand, are human constructions which may achieve any kind of complexity, but they will always stay as parts of the human consciousness and culture.

 

my reaction is as follows. 

 

1.      Yes, human consciousness evolved.  Digital consciousness could also evolve, via an artificial-life approach – but it is not clear that evolution is the only way to get intelligence and consciousness.  No argument has been presented as to why mind cannot, in principle, be engineered instead.  I believe that engineered minds can self-organize and direct their own activity if they are built this way; their engineering then simply provides the initial conditions.

2.      Machines may eventually become autonomous, just as children eventually grow up.  The nature of “machines” may change qualitatively as technology develops – indeed, this is to be expected.

 

In short, I believe the digital medium is just as flexible as the molecular medium.  Either one can be used to make tools, or to make autonomous beings.  The question of whether minds must evolve, or can alternatively be engineered, is a deep and interesting one; it is ultimately a mathematical and empirical question, and an answer cannot simply be assumed.

 

But Jarvilehto’s emphasis on the social and experiential nature of mind is an important one, and one that mainstream AI researchers would do well to heed more carefully.