Real AI

New Approaches to Artificial General Intelligence

 

 

Edited by:

Ben Goertzel and Cassio Pennachin

 

Chapter Authors:

Ben Goertzel, Cassio Pennachin, Pei Wang,

Eliezer Yudkowsky, Peter Voss,

Hugo de Garis, Joao Paulo Schwarz Schuler, Shane Legg,

Stephan Vladimir Bugaj, Vladimir Redko, Sergio Navega

[additional authors to be added]

 

 

 

 

1.   Purpose and Focus of the Book

The purpose of this edited volume is to give the first-ever coherent presentation of a body of contemporary research that, in spite of its integral importance to science (and arguably to humanity in general), is virtually unknown to the scientific and intellectual community.  This body of work has not been given a name before; in this book we christen it “Artificial General Intelligence” (AGI).  What distinguishes AGI work from run-of-the-mill “artificial intelligence” research is that it is explicitly focused on engineering general intelligence in the short term. 

Of course, “general intelligence” does not mean exactly the same thing to all relevant researchers.  Nevertheless, there is a marked distinction between AGI work on the one hand and, on the other:

·       Pragmatic but specialized AI research which is aimed at creating programs carrying out specific tasks like playing chess, diagnosing diseases, driving cars and so forth (most contemporary AI work falls into this category)

·       Purely theoretical AI research, which is aimed at clarifying issues regarding the nature of intelligence and cognition, but does not engage with the technical details of actually realizing artificially intelligent software (a lot of philosophy- and cognitive-science-oriented AI work falls into this category, for instance Minsky’s Society of Mind work)

Current work proceeding in the AGI vein is, we believe, unjustly obscure, and deserves to be brought to the attention of the AI community, and also to the broader community of scientists and students in related fields such as philosophy, neuroscience, linguistics, psychology, biology, sociology, anthropology and engineering. 

Bringing the diverse body of AGI research together in a single volume reveals the common themes among various researchers’ work, and makes clear what the big open questions are in this vital area of research.  It is our hope that this book will interest more researchers and students in pursuing AGI research themselves, thus aiding in the progress of science.

Our ideal would be to cover all major AGI research projects underway on the planet at this time.  Currently we have chapters committed from a number of leading researchers in the area, as well as from some lesser-known researchers of the younger generation with equally outstanding ideas.  We plan, over the next 2 months, to recruit chapters from the computer science community more broadly.  The current table of contents includes 8 chapters describing particular approaches to AGI, and 2 introductory and 1 concluding chapter giving general perspectives on AGI.  We consider it most likely that 2-5 further chapters will be recruited, describing additional researchers’ perspectives.

There is a handful of known AGI research projects whose architects have not yet committed to write chapters (Jason Hutchens’ HAL project and Steve Grand’s Creatures project, for example).  If chapters on these projects are not contributed, then sections overviewing this research will be added to the introductory chapter, so that the book remains relatively comprehensive.

2.   Schedule

The initial set of chapter authors have committed to submit their chapters by Feb. 1, 2002.   It is anticipated that authors recruited during November and December 2001 will need an additional month to complete their chapters, so that all chapters will be received by March 1, 2002.  The introductory and concluding chapters will be finalized during April 2002, and during April authors will also be encouraged to comment on each other’s chapters, leading to a round of revisions.  After all this, we anticipate that the final manuscript will be ready for submission to a publisher by roughly May 15, 2002.

Of course, preparation of final photo-ready copy will take additional time, and can only be done after a publisher is located, since different publishers have different templates for copy preparation. 

Our aim is Fall 2002 publication, which should be possible if a publisher is located either before the May 15 completion of the final manuscript or very shortly thereafter.

3.   About the Editors

The chief editor of the book, Dr. Ben Goertzel, has published 4 research treatises, one trade science book, and one previous edited volume, as well as numerous research papers (for his CV, see www.goertzel.org/ben/newResume.htm).  He will be aided in the editing process by Cassio Pennachin, his long-time collaborator in research and software development. 

Ben and Cassio are the chief architects of the Webmind AI Engine, one of the AGI projects described in the book.  From 1998-2001 they worked together at the AI start-up firm Webmind Inc.  Ben was co-founder, Chairman and CTO; Cassio was VP of Research and Development.  Two of the initial chapter authors (Pei Wang and Shane Legg) are former Webmind Inc. R&D staff. 

Ben and Cassio’s collaboration now continues under the auspices of the new start-up Cognitive Bioscience LLC, which focuses on applications of AI to post-genomic bioinformatics.  Ben is also currently working at the University of New Mexico, as a Research Professor; and Cassio is serving as a part-time software engineering manager at the New York bioinformatics firm Proteometrics.

4.    Intended Audience

The book is intended primarily for academics, graduate students and advanced undergraduates.  The core audience will consist of

·       computer scientists and computer science students

·       academics and students in “cognitive science” affiliated disciplines such as psychology, philosophy, linguistics and neuroscience

We hope to also attract a secondary audience consisting of

·       Scientifically curious computer professionals

·       Educated laypeople who have read relatively sophisticated (idea-focused rather than biography-focused) popular science books like Order Out of Chaos (Ilya Prigogine), Frontiers of Complexity (Coveney and Highfield), Gödel, Escher, Bach (Hofstadter), etc.

Some of the chapters will include some mathematical formalism, but each chapter will be readable by individuals without mathematical sophistication who are willing to skip over brief mathematical sections.

5.    Publishing Details

The length of the book can’t be predicted precisely at this stage, since the number of additional chapters to be recruited is still unknown, and since we have given initial chapter authors some liberty in setting the lengths of their chapters.   If each of the currently committed chapters were 25 pages in length, we would arrive at a book 275 pages in length (not counting front matter, index, and so forth).  So we project a final book length in the 250-400 page range.

Many of the chapter authors will include black and white illustrations with their chapters.  We assume that all illustrations will be embedded in the Microsoft Word files that contain the chapters themselves.  No color illustrations will be included.

 

6.   Contents

 

What is given here is an expanded table of contents, which includes, for each chapter, the author’s abstract.

 

Naturally, this covers only the chapters that have already been committed.  There is still time for additional chapters to be submitted, representing additional researchers’ approaches to AGI.

 

Note that the current assemblage of chapters is more than enough to make an excellent book; the point of recruiting additional chapters is to ensure that individuals doing serious AGI work outside the editors’ professional circle of acquaintances are given the opportunity to contribute as well.

 

 

Introduction, Ben Goertzel and Cassio Pennachin

 

A brief history of the AI field is given, with a focus on how, over time, AI has drifted from its original focus on the creation of real, general artificial intelligence, and become an interesting and valuable but far less ambitious branch of computer science, dealing with the creation and analysis of effective data structures and algorithms for carrying out highly specialized tasks.

 

An overview of current efforts in the “AGI” direction is then given.   The subsequent chapters of the book are briefly summarized, and some of the more salient similarities and differences between the chapter authors’ approaches are highlighted.  The relationship between the work described here and “mainstream AI” work is also emphasized, with special attention paid to cases where mainstream AI has provided tools that can be used as components within AGI systems.

 

 

Crucial AI Concepts, [all chapter authors]

 

AI has the same need for terminological precision as any other branch of science; and yet, it shares with philosophy a broad inter-theorist variation in the usage of key terms.  This problem cannot be entirely avoided, because ambiguous natural language terms like “mind”, “intelligence” and “reason” will only be fully clarified once AGI has become a mature experimental science.  But the problem can be palliated somewhat by paying careful attention to variations in usage among researchers.

 

In this chapter, we will review a number of key AI concepts (mind, intelligence, seed AI, brain, thought, reason, cognition, perception, emotion, imagination, creativity; this list may well change somewhat during the process of writing the chapter), and briefly present the perspectives of each of the chapter authors on these concepts.  The goal is not to reconcile all the differing points of view – though this will be done whenever possible – but rather to make clearer when different authors are talking about the same thing, versus when they’re using the same or similar words to talk about slightly but significantly different things.

 

 

The Logic of Intelligence, Pei Wang (Computer Science Dept., Temple University)

 

Is there an "essence" of intelligence that distinguishes intelligent systems from non-intelligent systems? If there is, then what is it? This chapter suggests an answer to these questions by introducing the ideas behind the NARS (Non-Axiomatic Reasoning System) project. NARS is based on the view that the essence of intelligence is the ability to adapt with insufficient knowledge and resources.  Guided by this belief, the author has designed a novel formal logic and implemented it in a computer system.  Such a "logic of intelligence" provides a unified explanation for many types of cognitive functions of the human mind, and is also concrete enough to guide the actual building of a general-purpose "thinking machine".
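
To give a concrete (and deliberately simplified) flavor of what "adaptation with insufficient knowledge and resources" can mean computationally, the sketch below represents a belief by the positive and negative evidence gathered for it, derives a frequency and a confidence from that evidence, and pools evidence from independent sources when beliefs are revised.  This is an illustrative approximation in the spirit of NARS, not Wang's actual system; the evidence-horizon constant K and the example statements are assumptions of the sketch.

    # Toy sketch of evidence-based truth values in the spirit of NARS; an
    # illustrative approximation only, not Pei Wang's actual system.

    from dataclasses import dataclass

    K = 1.0  # assumed "evidence horizon" constant

    @dataclass
    class Truth:
        positive: float  # amount of positive evidence gathered so far
        negative: float  # amount of negative evidence gathered so far

        @property
        def frequency(self) -> float:
            """Proportion of positive evidence among all evidence seen so far."""
            total = self.positive + self.negative
            return self.positive / total if total > 0 else 0.5

        @property
        def confidence(self) -> float:
            """How stable the frequency is expected to be, given finite evidence."""
            total = self.positive + self.negative
            return total / (total + K)

    def revise(a: Truth, b: Truth) -> Truth:
        """Combine two judgments that rest on independent bodies of evidence."""
        return Truth(a.positive + b.positive, a.negative + b.negative)

    if __name__ == "__main__":
        seen = Truth(positive=8, negative=1)    # "ravens are black", from one observer
        heard = Truth(positive=3, negative=0)   # the same statement, from another source
        pooled = revise(seen, heard)
        print(f"frequency={pooled.frequency:.2f}, confidence={pooled.confidence:.2f}")

The point of the toy is only that every conclusion carries a measure of how much evidence stands behind it, so the system can keep reasoning, and keep changing its mind, as new evidence arrives.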

 

 

The Webmind AI Engine, Ben Goertzel and Cassio Pennachin

 

This chapter reviews the Webmind AI Engine software system, versions of which have been under active development since 1997.   Webmind is intended as a distributed, self-organizing, experientially learning digital mind.  It is conceptually based on a model of the mind as a system consisting of a large number of interacting, pattern-recognizing and pattern-forming agents; the crux of the Webmind design is the particular assemblage of agents that it incorporates.  Through years of theoretical research and prototyping, we have arrived at a specific collection of “agent types” representing both concrete data and abstract patterns, and carrying out key mental functions such as perception, action, reasoning, planning, association-finding, and new concept formation.  We have also arrived at an efficient computational framework allowing agents of these types to interact in a distributed von Neumann computing environment.  The chapter briefly reviews the structures, dynamics and configuration of the Webmind AI Engine, discusses some lessons learned from experimentation with earlier Webmind versions, and explains the plan for the future development, testing and teaching of the current Webmind version.

 

General Intelligence and Self-Improving AI, Eliezer Yudkowsky (Singularity Institute for AI)

 

Human intelligence evolved slowly over the course of millions of years.  Once we understand human intelligence as a complex supersystem of interdependent complex subsystems, we can construct new intelligences with abilities that present-day humans lack - especially the ability of a mind to observe, understand, modify, and recursively self-improve the design and implementation of its component subsystems.  Other abilities absent in present-day humans include a sensory modality for code, use of general intelligence in low-level cognitive processes, and the ability to add and absorb new hardware and computational power.  Although focusing primarily on the problem of building the initial, prerequisite level of intelligence required for self-improvement to begin, this paper also discusses some of the implications of "seed AI", including the conclusion that there is a relatively short distance between human equivalence and transhumanity.

 

Fundamental Components of General Intelligence, Peter Voss (Adaptive Intelligence Inc.)

 

Identifying key aspects of general intelligence is a crucial step in engineering 'AGI' - and especially in designing 'Seed AI'.  Certain foundational components are essential for achieving the scope, flexibility, and autonomous learning ability of human cognition. While these parts can be separated abstractly, they are highly integrated to form an adaptive, dynamic system. They include: adaptive sense inputs and preprocessing; an integrated, context-encoding pattern database/network; adaptive output/action channels; multiple learning mechanisms; and various meta-cognitive functions. Conversely, several other functions usually deemed fundamental, irreducible elements - such as high-level logical thinking and language ability - are seen as naturally emerging (developing) from these more basic abilities.

 

Artificial Brains, Hugo de Garis (Computer Science Dept., Utah State University)

 

This chapter describes the "Utah-BRAIN Project", which is attempting to build an artificial brain comprising nearly 100 million artificial neurons. 3D cellular-automata-based neural network circuit modules of some 1000 neurons each are evolved separately, in about a second apiece, in a special evolvable-hardware machine called a "CAM-Brain Machine" (CBM). The CBM also performs the binary neural signaling of an assembly of 64,000 such modules in real time. Human "Brain Architects" (BAs) interconnect these separately evolved modules into artificial brain architectures in a gigabyte of RAM to perform a large variety of functions.
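
Setting aside the cellular-automaton substrate and the hardware acceleration that are the whole point of the CBM, the module-evolution step can be caricatured in ordinary software as a genetic algorithm searching over the parameters of a small neural circuit until it performs a target signaling function.  The toy module, the XOR task and all numerical settings below are hypothetical illustrations, not part of the Utah-BRAIN design.

    # Toy caricature of the "evolve a small neural module" step: a genetic algorithm
    # searches for parameters of a tiny binary-signaling circuit that computes XOR.
    # The real CBM evolves cellular-automaton-based modules in dedicated hardware;
    # this sketch illustrates only the evolutionary loop itself.

    import math
    import random

    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    GENOME_LEN = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

    def run_module(w, x):
        h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
        h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
        return 1 if math.tanh(w[6] * h0 + w[7] * h1 + w[8]) > 0 else 0

    def fitness(genome):
        return sum(1 for x, target in CASES if run_module(genome, x) == target)

    def evolve(pop_size=60, generations=200, sigma=0.4):
        population = [[random.uniform(-2, 2) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == len(CASES):
                break
            parents = population[: pop_size // 4]
            population = parents + [
                [g + random.gauss(0, sigma) for g in random.choice(parents)]
                for _ in range(pop_size - len(parents))
            ]
        return max(population, key=fitness)

    if __name__ == "__main__":
        best = evolve()
        print("module fitness:", fitness(best), "out of", len(CASES))

The CBM's contribution is doing this kind of search, at far larger scale, in roughly a second per module, so that tens of thousands of evolved modules can then be wired together by hand into a larger architecture.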

 

As Yet Untitled Chapter, Shane Legg

 

No abstract submitted yet

 

Shane will outline his approach to AGI based on self-organization and algorithmic information.

 

Developing True Software Intelligence Inspired by Evolutionary and Biological Processes, Joao Paulo Schwarz Schuler

 

It is proposed that evolutionary and biologically inspired algorithms may provide the shortest path to the development of a "truly intelligent software program".   Important aspects of the evolution of life and natural information-processing systems are reviewed, and the author’s intended definition of “truly intelligent” is clarified; then Daniel Dennett’s Creatures is introduced as one valuable framework in which to flesh out these ideas and to create computational models of truly intelligent systems.  Two types of artificially intelligent creatures are presented: “primary conscious creatures”, which can imagine and plan for the future, and “higher-order conscious creatures”, which can imagine about imagination, think about thinking, and develop physical and abstract devices. Evolutionary/biological techniques for creating both primary conscious and higher-order conscious software programs are outlined.

 

Both primary and higher-order conscious creatures, it is proposed, must learn the laws of the perceived environment and body, using the technique of nondeterministic function induction.  They must then use the laws learned in this way to plan for the future. A higher-order conscious creature must use these tools to perceive its own mind dynamics, doing function induction over instances of function induction, and making plans about plans as well as about elementary actions.  It can be shown how semantics, language, emotion and culture emerge from function induction and planning.  From the function-induction system emerge a kind of long-term memory and learning, while from the planning system emerge imagination, creativity and engineering.
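
One minimal, concrete reading of "learn the laws of the environment by function induction, then plan with them" (the author's formulation is more general, and nondeterministic) is sketched below: a creature records observed transitions, induces a transition function from them, and plans by simulating candidate action sequences through the induced model.  The one-dimensional toy environment and all names in the sketch are hypothetical.

    # Minimal sketch of "induce the laws of the environment, then plan with them":
    # a creature wanders a one-dimensional track recording (state, action, next_state)
    # transitions, induces a transition function by majority vote over what it has
    # seen, and plans by simulating action sequences through the induced model.

    import itertools
    import random
    from collections import Counter, defaultdict

    ACTIONS = ["left", "right"]

    def true_dynamics(state, action):
        """The hidden law of the toy environment (unknown to the creature)."""
        return max(0, state - 1) if action == "left" else min(9, state + 1)

    def induce_model(observations):
        """Function induction by table lookup plus majority vote over observed outcomes."""
        votes = defaultdict(Counter)
        for state, action, nxt in observations:
            votes[(state, action)][nxt] += 1
        return {key: counter.most_common(1)[0][0] for key, counter in votes.items()}

    def plan(model, start, goal, horizon=6):
        """Search over action sequences, simulating each through the learned model."""
        for seq in itertools.product(ACTIONS, repeat=horizon):
            state = start
            for action in seq:
                state = model.get((state, action), state)  # unseen transitions: stay put
                if state == goal:
                    return seq
        return None

    if __name__ == "__main__":
        observations, state = [], 5
        for _ in range(500):  # wander randomly, recording experience
            action = random.choice(ACTIONS)
            nxt = true_dynamics(state, action)
            observations.append((state, action, nxt))
            state = nxt
        model = induce_model(observations)
        print("plan from 2 to 6:", plan(model, start=2, goal=6))

In the author's proposal the "higher-order" step would apply the same machinery to records of the creature's own inductions and plans, rather than only to external sensations.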

 

Finally, some existing prototype AI systems are reviewed in this theoretical context, including NARS and Webmind.

 

Epigenetic Programming, Cassio Pennachin and Ben Goertzel

 

There are two plausible paths to the creation of AGI: explicit digital brain engineering, and digital evolution.  This chapter explores the second path, and more specifically the question: what kind of digital evolution framework might be adequate to lead to the emergence of intelligent “artificial life forms”? We do not discuss software that is currently under development, but rather software that we believe should be currently under development.  Even though we do not believe this will be as rapid a path to AGI as direct digital brain engineering, we believe the epigenetic programming approach will lead to different and valuable results.

 

Three key conceptual points are made.  First, that evolution in the strict Darwinian sense will never be enough; one needs artificial ecology.  Second, that the genotype/phenotype distinction is critical for the evolution of complex forms, meaning in computational terms that standard genetic programming must be replaced with “epigenetic programming” in which the artificial genetic material produced by crossover and mutation is not interpreted as an organism itself, but is rather used to seed a dynamical process resulting in the production of an organism.  Third, that this dynamical process must involve the complex interactions of many agents, whose emergent behaviors produce the phenotype.
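
The computational contrast behind the second and third points can be made tangible with a toy sketch.  In standard genetic programming the evolved genome is evaluated directly; in the epigenetic variant the genome only seeds a developmental process, and it is the emergent product of that process, the phenotype, that is evaluated.  The development rule and the fitness function below are arbitrary placeholders, not the proposal described in the chapter.

    # Minimal sketch of the genotype/phenotype split in "epigenetic programming".
    # The genome is not evaluated directly; it seeds a developmental process whose
    # emergent product (the phenotype) is what gets tested against the environment.
    # The development rule and the fitness function are hypothetical placeholders.

    import random

    GENOME_LEN = 16

    def random_genome():
        return [random.randint(0, 3) for _ in range(GENOME_LEN)]

    def develop(genome, steps=8):
        """Toy 'epigenesis': the genome acts as a rule table that repeatedly rewrites
        a growing body of cells, so the phenotype emerges from a dynamical process."""
        body = [genome[0] % 2]
        for _ in range(steps):
            body = [genome[(cell + i) % GENOME_LEN] % 2 for i, cell in enumerate(body)] + [1]
        return body

    def fitness(phenotype):
        """Hypothetical selective pressure: prefer phenotypes that alternate 0s and 1s."""
        return sum(1 for a, b in zip(phenotype, phenotype[1:]) if a != b)

    def mutate(genome):
        g = list(genome)
        g[random.randrange(GENOME_LEN)] = random.randint(0, 3)
        return g

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    if __name__ == "__main__":
        population = [random_genome() for _ in range(40)]
        for _ in range(50):
            population.sort(key=lambda g: fitness(develop(g)), reverse=True)
            parents = population[:10]
            population = parents + [
                mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(30)
            ]
        best = max(population, key=lambda g: fitness(develop(g)))
        print("best phenotype:", develop(best), "fitness:", fitness(develop(best)))

The chapter's third point goes further than this single-creature sketch: the developmental process itself should be carried out by many interacting agents, and selection should act within an artificial ecology rather than against a fixed fitness function.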

 

Following the conceptual discussion, Webworld, a specific software framework intended to allow evolutionary-ecological epigenetic programming across internetworked machines, is briefly described, and a specific proposal for digital epigenesis is put forth, loosely inspired by the details of mammalian genetics and proteomics, as well as by practical lessons learned from experimentation with related technologies like genetic programming and the Webmind AI Engine.

 

What is the Natural Path Towards Artificial Intelligence?, Vladimir Redko (Keldysh Institute of Applied Mathematics)

 

AI is an area of applied research. Experience demonstrates that an area of applied research succeeds when there is a powerful scientific base behind it. For example, solid-state physics was the scientific base for microelectronics in the second half of the 20th century, and the results of microelectronics have been colossal: microelectronics is now everywhere. It should be noted that solid-state physics is also interesting to physicists from a purely scientific point of view, so physicists contributed a great deal to the scientific basis of microelectronics independently of the possible applications of their results.

 

What could be a scientific base for AI, analogous to the scientific base of microelectronics?  We can consider this problem in the following manner. Natural human intelligence emerged through biological evolution. It is thus very interesting, from a scientific point of view, to study the evolutionary processes that resulted in human intelligence: to study cognitive evolution, the evolution of animals' cognitive abilities. Moreover, investigations of cognitive evolution are very important from an epistemological point of view -- such investigations can clarify the very profound epistemological problem of why human intelligence, human thinking and human logic are applicable to the cognition of nature. We therefore conclude that the investigation of cognitive evolution is the most natural scientific base for AI.

 

What could be the subject of investigations of cognitive evolution? The chapter outlines the “intelligent inventions” of biological evolution (unconditional reflexes, habituation, conditioned reflexes, …) that are to be modeled; the conceptual theories (the metasystem transition theory of V. F. Turchin and the theory of functional systems of P. K. Anokhin) that can serve as conceptual backgrounds for the modeling of cognitive evolution; and the modern approaches (Artificial Life, Simulation of Adaptive Behavior) to such modeling.

 

To exemplify possible research, two concrete computer models are described: an “Alife Model of Evolutionary Emergence of Purposeful Adaptive Behavior” and a “Model of Evolution of Web Agents”. The first model is a purely scientific investigation; the second is a step toward practical applications. The two models have a number of common features and illustrate possible interrelations between purely academic investigations of cognitive evolution (the first model) and applied research directed toward Internet AI (the second model).

 

Finally, a possible path from these simple concrete models to the implementation of higher cognitive abilities is outlined.

 

 

The Internet as a Medium for Distributed Digital Intelligence, Stephan Vladimir Bugaj (Cognitive Bioscience LLC) and Ben Goertzel

 

This chapter explores the notion of transforming the Internet, or large portions thereof, into a globally distributed intelligent system.  The Internet contains both the raw computing power needed to support AI thought processes and the diverse data needed to fill up an AI mind.  The practical obstacles in the way of such a pathway to AI are obviously quite significant.  However, the consistency of the Internet’s network structure with the self-organizing network structure underlying many leading approaches to artificial general intelligence lends the approach an extraordinary appeal.

 

The Webworld distributed computing platform, designed initially at Webmind Inc., is discussed in some detail, and some Webworld-based approaches to developing globally distributed digital intelligence are discussed.

 

Hebbian Logic: Achieving Artificial General Intelligence through Self-Organizing Neural Networks, Ben Goertzel

 

As compared to the complexity of an integrative AI system such as Webmind, or an ambitious, specialized AI hardware engine such as the CAM-Brain Machine, there is something appealingly simple about the formal neural network approach to general intelligence.  This article explores the question of how it might actually be possible to construct a thinking machine out of a self-organizing network consisting of a single node type and a single link type, with Hebbian-type learning rules.  The conclusion is that this is indeed possible, if the learning rules are made time-dependent in an appropriate way, and if the network is given an appropriate global architecture.
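
The local ingredient referred to here is the classical Hebbian update, which strengthens a link in proportion to the correlated activity of the nodes it joins; the chapter's claim is that such rules yield general intelligence only when made time-dependent and embedded in an appropriate global architecture.  The sketch below shows the local rule with one hypothetical form of time dependence (a decaying learning rate plus a weight-decay term); it is an illustration, not the scheme proposed in the chapter.

    # Minimal sketch of a Hebbian weight update with a time-dependent learning rate.
    # This shows only the local rule; the chapter's argument concerns the global
    # architecture and scheduling needed to make such rules add up to a mind.

    import random

    def hebbian_step(weights, activations, t, eta0=0.1, tau=100.0, decay=0.001):
        """Strengthen links between co-active nodes; the learning rate eta(t) shrinks
        over time (one hypothetical form of 'time dependence')."""
        eta = eta0 / (1.0 + t / tau)
        n = len(activations)
        for i in range(n):
            for j in range(n):
                if i != j:
                    # Hebb: co-activity reinforces the link; a small decay term
                    # keeps the weights bounded (a forgetting-style correction).
                    weights[i][j] += eta * activations[i] * activations[j] - decay * weights[i][j]
        return weights

    if __name__ == "__main__":
        n = 4
        weights = [[0.0] * n for _ in range(n)]
        for t in range(1000):
            a = random.random()
            # nodes 0 and 1 tend to fire together; nodes 2 and 3 fire independently
            activations = [a, a, random.random(), random.random()]
            weights = hebbian_step(weights, activations, t)
        print("link 0->1 (correlated nodes):  ", round(weights[0][1], 3))
        print("link 0->2 (uncorrelated nodes):", round(weights[0][2], 3))

After enough exposure the link between the two correlated nodes ends up stronger than the links between uncorrelated ones, which is all the bare rule can deliver; the chapter's interest lies in what additional structure turns this kind of local statistics-gathering into reasoning.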

 

Designing Real AI From First Principles: A Cognitive Approach, Sergio Navega (Intelliwise Corp., Sao Paulo, Brazil)

 

Artificial Intelligence is a field of investigation that has defied researchers for more than fifty years. Hundreds (if not thousands) of theories and algorithms have been devised, yet the results still seem modest. Why is this problem so hard to solve? In this chapter, we try to look at the problem from a different perspective. Eight-month-old infants are able to segment words from fluent speech just by perceiving the different transitional probabilities between consecutive phonemes (Saffran et al. 1996). Seven-month-old infants, after a 2-minute habituation period, are able to distinguish sequences of phonemes with different generic structures, even if the test sequences use unknown phonemes (Marcus 1999).  Artificial grammar learning has been demonstrated in 1-year-olds, leading to specific and abstract knowledge (Gomez & Gerken 1999). Far from being a domain-specific ability (auditory, in these cases), such results are being replicated in other sensory modalities (Kirkham et al. 2001 show the same statistical abilities using visual stimuli). These and other remarkable studies are the main inspiration for the proposal sketched in this chapter.

The main point of this text is to show that real AI can be developed through the judicious study of the large amount of cognitive data collected in experiments with infants and adults over the last 20 years. The main goal is to find common points among diverse competences such as language acquisition, perception, reasoning, attention and other cognitive abilities. The architecture sketched here is composed roughly of three levels: one sub-symbolic, one symbolic and one propositional. This division, although merely academic and didactic, exhibits an interesting side-effect, one that emerges from the natural and smooth operation of the three levels: it is autocatalytic. Knowledge derived from previous experiences -- whether sub-symbolic or propositional -- allows the perception of new (and more complex) structures sensed from the environment. The chapter concludes by arguing that progress in real AI seems achievable once we keep strict conformance to experimental cognitive data, not because this is the only way to go (it obviously isn't), but because it is a way that will surely lead to a good solution (our own mind).
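
The Saffran et al. finding admits a simple computational paraphrase: estimate, from a stream of syllables, the probability of each syllable following the previous one, and posit word boundaries wherever that transitional probability dips.  The miniature artificial "language", the syllable length and the boundary threshold in the sketch below are hypothetical; it is meant only to make the statistical-learning idea tangible.

    # Toy paraphrase of statistical word segmentation via transitional probabilities
    # (in the spirit of Saffran et al. 1996): syllable pairs inside a word recur,
    # so word boundaries show up as dips in P(next syllable | current syllable).
    # The miniature "language" and the boundary threshold are hypothetical.

    import random
    from collections import Counter, defaultdict

    WORDS = ["bidaku", "padoti", "golabu"]  # made-up trisyllabic words
    SYLLABLE_LEN = 2

    def syllables(word):
        return [word[i:i + SYLLABLE_LEN] for i in range(0, len(word), SYLLABLE_LEN)]

    def make_stream(n_words=300):
        stream = []
        for _ in range(n_words):
            stream.extend(syllables(random.choice(WORDS)))
        return stream

    def transitional_probabilities(stream):
        pair_counts = defaultdict(Counter)
        for a, b in zip(stream, stream[1:]):
            pair_counts[a][b] += 1
        return {
            (a, b): count / sum(counter.values())
            for a, counter in pair_counts.items()
            for b, count in counter.items()
        }

    def segment(stream, tp, threshold=0.5):
        words, current = [], [stream[0]]
        for a, b in zip(stream, stream[1:]):
            if tp[(a, b)] < threshold:  # low transitional probability => word boundary
                words.append("".join(current))
                current = []
            current.append(b)
        words.append("".join(current))
        return words

    if __name__ == "__main__":
        stream = make_stream()
        tp = transitional_probabilities(stream)
        print(segment(stream, tp)[:8])

Within a word the transitional probabilities stay high while they drop at word boundaries, so the unbroken syllable stream comes apart into the original "words" without any explicit dictionary; this is the kind of domain-general statistical competence the chapter proposes to build on.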

 

 

 

ADDITIONAL CHAPTERS BY ADDITIONAL AUTHORS

WILL BE INSERTED AT THIS POINT IN THE BOOK

 

 

Lessons Learned the Hard Way: A Dialogue on Practical Experiences Attempting Implementation of Artificial General Intelligence, [all chapter authors]

 

One of the reasons that AGI research is so uncommon is that translating complex, ambitious theories about mind and brain into functioning computer systems can be incredibly hard.   The practical difficulties sometimes seem to dwarf the conceptual difficulties, which themselves are obviously far from insubstantial.  By and large, the step from theory to implementation tends to be much more onerous with AGI than with mainstream single-task-focused AI.

 

This chapter surveys the various difficulties experienced by the chapter authors in working toward functioning implementations of their AI designs, and reviews some of the lessons learned through overcoming (or, in some cases, succumbing to) these difficulties.  The chapter is presented in dialogue format, so as to highlight the differences and interactions between the different authors’ perspectives.