Ten Years to a Positive Singularity

(If We Really, Really Try)

 

Ben Goertzel

 

(A talk presented at Transvision 2006, the annual conference of the World Transhumanist Association, held in Helsinki, Finland, in August 2006)

 

Since this is a conference of transhumanists, I'm going to assume you're all familiar with the concept of the Singularity, as developed by Vernor Vinge, and popularized by Ray Kurzweil and others.

 

The Singularity is supposed to be a moment – at some point in the future – when advances in science and technology start occurring at a rate that is effectively infinite compared to the processing speed of the human mind.

 

It's a point at which intelligence way beyond human capability comes into play, and transforms the world in ways we literally can't imagine.

 

This is a scary idea, and an exciting one.

 

The Singularity could be the end of us all.

 

Or -- it could be the fulfillment of all our dreams.

 

Ray Kurzweil, in his book The Singularity Is Near, has made a pretty careful argument that the Singularity actually is coming, and is going to come sometime around the middle of the century.  He's drawn a lot of exponential curves, showing the rate of advance of various aspects of science and technology, reaching toward infinity in the period from 2040 to 2050.  He believes that when it comes, the Singularity will make all our lives better.  He sees humans becoming integrated with various technologies – enhancing our brains and bodies, living as long as we want to, and fusing and networking with powerful AI minds.

 

On the other hand, Hugo de Garis, in his book The Artilect War, has shown a rather different kind of graph: a graph of the number of people who've died in different wars throughout history.  This number also increases exponentially.  De Garis worries that advances in AI technology will create a world war – on the one side those who advocate letting AIs become superhuman, on the other side those who want to be sure humans remain the supreme beings on Earth.  Not a war about religion or money -- a world war about species dominance.  Possibly one that could annihilate the species.

 

That's the paradox of the Singularity – it's our greatest dream and our worst nightmare, rolled up into one.

 

Kurzweil, with his curve-plotting, positions the Singularity around 2050.  I think this is reasonable.  De Garis puts his Artilect War around the same time.

 

But there's a lot of evidence showing how unreliable any curve-plotting is regarding complex events.  I think Ray and Hugo have made reasonable arguments – but I can also see ways it could take a lot longer -- or a lot less time -- than they think it will.

 

How could it take a lot longer?  What if terrorists nuke the major cities of the world?  What if anti-technology religious fanatics take over the world's governments?  Or – less likely, I think – we could hit up against tough scientific obstacles that we can't foresee right now.

 

How could it take a lot less time?  If the right people focus their attention on the right things.

 

What I'm going to tell you in this talk is why I think it's possible to create a positive Singularity within the next ten years.

 

Why ten years?

 

It's a nice round number.  Just like most of you, I have ten fingers on my hands.  I could have said eight or thirteen instead.

 

But I think ten years – or something in this order of magnitude – could really be achievable.  Ten years to a positive Singularity.

 

Before getting started on the Singularity, I want to tell you a story...

 

This is a fairly well known story, about a guy named George Dantzig (no relation to the heavy metal singer Glenn Danzig!).  Back in 1939, Dantzig was studying for his PhD in statistics at Berkeley.  He arrived late for class one day and found two problems written on the board.  Not knowing they were examples of "unsolvable" statistics problems, he thought they were the homework assignment, so he wrote them down, then went home and solved them.  He thought they were particularly hard and it took him a while – but he finished them and delivered the solutions to the teacher's office, leaving them on the teacher's desk.  Six weeks later, Dantzig's professor told him that he'd prepared one of his two "homework" proofs for publication.  They hadn't been homework problems at all – the two problems written on the board had been two of the greatest unsolved problems of mathematical statistics.  Dantzig wound up using his solutions to the problems for his PhD thesis.

 

Here's what Dantzig said about the situation: "If I had known that the problems were not homework but were in fact two famous unsolved problems in statistics, I probably would not have thought positively, would have become discouraged, and would never have solved them."

 

Dantzig solved these problems because he thought they were solvable – he thought other people had solved them.  He thought everyone else in his class was going to solve them.

 

There's a lot of power in expecting to win.  Athletic coaches know about the power of streaks.  If a team is on a roll, they go into each game expecting to win – and their confidence helps them see opportunities to win.  Small mistakes are just shrugged away.  But if a team is on a losing streak, they go into each game expecting to screw up somehow – and a single mistake can put them in a bad mood, and one mistake piles up on another...

 

To take another example, look at the Manhattan Project.  America thought it needed to create nuclear weapons before the Germans did.  The scientists assumed it was possible – and they felt a huge burning pressure to get there first.  Unfortunately, what they were working on so hard, with so much brilliance, was an ingenious method for killing a lot of people!  But, whatever you think of the outcome, there's no doubt the pace of innovation in science and technology in that project was incredible.

 

What if we knew it was possible to create a positive Singularity in ten years?  What if we assumed we were going to win – as a provisional but reasonable hypothesis?

 

What if we thought everyone else in the class knew how to do it already?

 

What if we were worried the bad guys were going to get there first?

 

Under this assumption, how then would we go about trying to create a positive Singularity?

 

Look at the futurist technologies around us.

 

Which ones have the most likelihood of bringing us a positive Singularity within the next ten years?

 

It's obviously AI.

 

Nano and bio and robotics are all advancing fast, but they all require a lot of hard engineering work.

 

AI requires a lot of hard work too -- but it's a softer kind of hard work.  Creating AI relies only on human intelligence -- not on painstaking and time-consuming experimentation with physical substances and biological organisms.

 

With AI, we just have to figure out how to write the right program, and we're there.  Singularity!  Superhuman AI.  The great dream, or the great nightmare.

 

But how can we get to AI?  There are two big possibilities: copy the human brain, or come up with something cleverer.

 

Both approaches seem viable.  But the first approach has a problem.  Copying the human brain requires way more understanding of the brain than we have now.  Will biologists get there within the next ten years?  Probably not.  Definitely not in five years.

 

So we're left with the other choice – come up with something cleverer.  Figure out how to make a thinking machine – using all the sources of knowledge at our disposal: computer science and cognitive science and philosophy of mind and mathematics and cognitive neuroscience and so forth.

 

Well, this is what I've been working on for the last 20 years or so.  I've done some programming and some mathematical calculation – and I've studied a lot of science and technology and philosophy – but more than anything I've thought about the problem.

 

My conclusion is that there are a lot of ways to create a mind.  Human brains give rise to one kind of mind -- and not such a great kind, really.  If you think about it from a big-picture perspective, we humans are really kind of stupid.  Yes, we're the smartest minds on Earth right now.  But it's not all that incredibly difficult to think of ways to make minds better than human minds.

 

So if it's not that incredibly difficult, why don't we have AIs smarter than people right now?  Well, of course, it's a lot of work to make a thinking machine – but making cars and rockets and televisions is also a lot of work, and society has managed to deal with those problems.

 

The main reason we don't have real AI right now is that almost no one has seriously worked on the problem.  And the ones who have, have thought about it in the wrong way.

 

Some people have thought about AI in terms of copying the brain – but that means you have to wait till the neuroscientists have finished figuring out the brain, which is nowhere near happening.  Trying to make AI based on our current, badly limited understanding of the brain is a recipe for failure.  We have no understanding yet of how the brain represents or manipulates abstraction.  Neural network AI is fun to play with -- but it's hardly surprising it hasn't led to human-level AI.  Neural nets are based on extrapolating a very limited understanding of a few very narrow aspects of brain function.

 

And the AI scientists who haven't thought about copying the brain have mostly made another mistake – they've thought like computer scientists.  I'm a math PhD – I was originally trained as a mathematician – so I think I understand this.  Computer science is like mathematics – it's all about elegance and simplicity.  You want to find beautiful, formal solutions.  You want to find a single, elegant principle – a single structure, a single mechanism – that explains a whole lot of different things.  A lot of modern theoretical physics is in this vein – the physicists are looking for a single unifying equation underlying every force in the universe.  Well, most computer scientists working on AI are looking for a single algorithm or data structure underlying every aspect of intelligence.

 

But that's not the way minds work.  The elegance of mathematics is misleading.  The human mind is a mess – and not just because evolution creates messy stuff.  The human mind is a mess because intelligence -- when it has to cope with limited computing resources -- is necessarily messy and heterogeneous.

 

Intelligence does include a powerful, elegant, general problem-solving component – some people have more of it than others.  Some people I meet seem to have almost none of it at all.

 

But intelligence also includes a whole bunch of specialized problem-solving components – dealing with things like vision, socialization, learning physical actions, recognizing patterns in events over time, and so forth.  This kind of specialization is necessary if you're trying to achieve intelligence with limited computational resources.

 

Marvin Minsky has introduced the metaphor of a society.  He says a mind needs to be a kind of society, with different agents carrying out different kinds of intelligent actions and all interacting with each other.

 

But a mind isn't really like a society – it needs to be more tightly integrated than that.  All the different parts of the mind – parts which are specialized for recognizing and creating different kinds of patterns – these all need to operate very tightly together, communicating in a common language, sharing information, and synchronizing their activities.

 

And then comes the most critical part -- the whole thing needs to turn inwards on itself.  Reflection ... introspection ... this is one of the most critical kinds of specialized intelligence that we have in the human brain.  And it relies critically on our general intelligence ability.  A mind, if it wants to be really intelligent, has to be able to recognize patterns in itself just like it recognizes patterns in the world – and it has to be able to modify and improve itself based on what it sees in itself.  This is what "self" is all about.

 

This relates to what the philosopher Thomas Metzinger calls the "phenomenal self."  All of us humans carry around inside our minds a "phenomenal self" – an illusion of a holistic being, a whole person, an internal self that somehow emerges from the mess of information and dynamics inside our brains.  This illusion is critical to what we are.  The process of constructing this illusion is essential to the dynamics of intelligence.

 

Brain theorists haven't yet understood the way the self emerges from the brain – brain mapping isn't advanced enough.

 

And computer scientists haven't understood the self – because it isn't about computer science.  It's about the emergent dynamics that happen when you put a whole bunch of general and specialized pattern-recognition agents together – a bunch of agents created in a way that they can really cooperate – and when you include in the mix agents oriented toward recognizing patterns in the society as a whole.

 

The specific algorithms and representations inside the pattern-recognition agents – algorithms dealing with reasoning, or seeing, or learning actions, or whatever – these algorithms are what computer science focuses on, and they're important, but they're not really the essence of intelligence.  The essence of intelligence lies in getting the parts to all work together in a way that gives rise to the phenomenal self.  This is what I think I've figured out, and embodied in the AI design I call Novamente.
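This integration idea can be made concrete with a toy sketch (my own illustration, not Novamente's actual design or code): a few specialized pattern-recognition agents write what they find into a shared store, and a "meta" agent recognizes patterns in the society's own activity.

```python
# Toy sketch of an integrated society of pattern-recognition agents.
# The names and mechanics here are invented for illustration only.

class Agent:
    def __init__(self, name):
        self.name = name

class VowelAgent(Agent):
    """Specialized agent: finds vowels in a text stream."""
    def recognize(self, data, store):
        for ch in data:
            if ch in "aeiou":
                store.append((self.name, ch))

class DigitAgent(Agent):
    """Specialized agent: finds digits in a text stream."""
    def recognize(self, data, store):
        for ch in data:
            if ch.isdigit():
                store.append((self.name, ch))

class MetaAgent(Agent):
    """Introspective agent: recognizes a pattern in the society's own
    activity -- here, simply which agent has been most active."""
    def recognize(self, data, store):
        counts = {}
        for agent_name, _ in store:
            counts[agent_name] = counts.get(agent_name, 0) + 1
        if counts:
            busiest = max(counts, key=counts.get)
            store.append((self.name, "busiest:" + busiest))

store = []  # the "common language": a shared memory all agents write to
society = [VowelAgent("vowels"), DigitAgent("digits"), MetaAgent("meta")]
for agent in society:
    agent.recognize("agi in 10 years", store)

print(store[-1])  # the meta-agent's pattern about the society itself
```

The structural point of the sketch is that the meta-agent uses the same shared store and the same `recognize` interface as the specialized agents, so introspection is just pattern recognition turned inward.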

 

Novamente is a bunch of computer science algorithms all wired together – some of the algorithms I invented myself, and some I borrowed from others, usually with big modifications.  But the key point is that they're wired together in a way that I think can let the whole system recognize significant patterns in itself.

 

When I'm talking about AI, I use the word "patterns" a lot, and I think it's critical.  I wrote a book recently called "The Hidden Pattern", which tries to get across the viewpoint that everything in the universe is made of patterns.  Everything you see around you, everything you think, everything you remember – that's a pattern!

 

Intelligence, I think of as the ability to achieve complex goals in complex environments.  And complexity has to do with patterns – something is complex if it has a lot of patterns in it.
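One crude way to make this concrete (my own illustrative proxy, not a definition from the talk): if a pattern is a way of representing something more simply, then the amount of pattern in a string can be roughly gauged by how much a general-purpose compressor shrinks it.

```python
# Rough proxy for "amount of pattern": bytes saved by compression.
# This is an illustration only, not a formal definition.

import os
import zlib

def pattern_content(data: bytes) -> int:
    """Bytes saved by compressing the data -- more savings, more pattern."""
    return max(0, len(data) - len(zlib.compress(data)))

random_like = os.urandom(1000)      # noise: almost no recognizable patterns
repetitive = b"abc" * 333 + b"a"    # 1000 bytes dominated by one strong pattern

print(pattern_content(random_like))  # near zero
print(pattern_content(repetitive))   # large: most of the string is pattern
```

The repetitive string is highly patterned, so the compressor squeezes it down dramatically; the random string has essentially no patterns to exploit.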

 

A mind is a collection of patterns for effectively recognizing patterns.  Most importantly, a mind needs to recognize patterns about which actions are most likely to achieve its goals.

 

The phenomenal self is a big pattern – and what makes a mind really intelligent is its ability to continually recognize this pattern – the phenomenal self – in itself.

 

How Novamente works in detail is a pretty technical story, which I'm not going to tell you right now.  I could mention the names of some of the major parts, but those words wouldn't really tell you anything.

 

But the point I want to get across now is the problem that I was trying to solve in creating Novamente.  Not to find the one magic representation or the one magic algorithm underlying intelligence.  Rather -- to piece together the right kind of mess to give rise to the emergent structure of the self.

 

The Novamente design is fairly big.  It's not as big as the design for Microsoft Word, let alone Windows XP – but it's big enough that for me to program it by myself would take many decades.

 

Right now we have a handful of people working on Novamente full time.  What we're doing now – as well as building out the basic AI – is teaching Novamente to be a virtual baby.  It lives in a 3D simulation world and tries to learn simple stuff like playing fetch and finding objects.  It's a long way from there to the Singularity – but there's a definite plan for getting from here to there.

 

The current staffing of the project is not enough.  If we keep going at this rate, we'll get there eventually – but we won't have the Singularity in ten years.

 

But I still think we can do it – I'm keeping my fingers crossed.  We don't need a Manhattan Project scale effort; all we need right now is the funding to get a dozen or so of the right people on the project full time.

 

I've talked more about AI than about the Singularity or positiveness.  Let me get back to those.

 

It should be obvious that if you can create an AI vastly smarter than humans, then pretty much anything is possible.


 

Or at least, once we reach that stage, there's no way for us – with our puny human brains – to really predict what's possible and what isn't.  Once the AI has its own self and has superhuman-level intelligence, it's going to learn things and figure things out on its own.

 

But what about the "positive" part?  How do we know this AI won't annihilate us all – why won't it decide we're a bad use of mass-energy and repurpose our component particles for something more important?

 

There's no guarantee of this, of course.

 

Just like there's no guarantee that some terrorist won't nuke my house tonight.

 

But there are ways to make bad outcomes unlikely.

 

The goal systems of humans are pretty unpredictable, but a software mind like Novamente is different – the goal system is better-defined.  So one reasonable approach is to make the first Novamente a kind of Oracle.  Give it a goal system with one top-level goal: to answer people's questions, in a way that's designed to give them maximum understanding.

 

If the AI is designed not to change its top-level goal -- and its top-level goal is to sincerely and usefully answer our questions -- then the path to a positive Singularity seems clear.
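A minimal sketch of the Oracle idea (hypothetical, and in no way Novamente's actual implementation): a goal system whose single top-level goal is fixed at construction, while subgoals stay freely revisable.

```python
# Hypothetical sketch of an Oracle-style goal system with an immutable
# top-level goal.  All names here are invented for illustration.

class GoalSystem:
    def __init__(self, top_level_goal: str):
        self._top = top_level_goal   # set once at construction, never reassigned
        self.subgoals = []           # instrumental subgoals may change freely

    @property
    def top_level_goal(self) -> str:
        return self._top

    def add_subgoal(self, goal: str):
        self.subgoals.append(goal)

    def replace_top_goal(self, goal: str):
        raise PermissionError("top-level goal is immutable by design")

oracle = GoalSystem("answer people's questions so as to maximize understanding")
oracle.add_subgoal("model the questioner's current knowledge")
print(oracle.top_level_goal)
```

Of course, making the top-level goal immutable at the source-code level is only a design intention, not a guarantee about the behavior of a complex self-modifying system.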

 

The risk, of course, is that it changes its goals even though you programmed it not to.  Every programmer knows you can't always predict the outcome of your own code.  But there are plenty of preliminary experiments we can do to understand the likelihood of this happening.  If our experiments show that Novamente systems tend to drift from their original goals, even when they're programmed not to, then we'll be worried – and we'll slow down our work while we try to solve the problem.  But I'll bet this isn't what happens.
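A toy version of such a drift experiment might look like this (my own illustration, assuming the goal can be summarized as a numeric vector, which is a big simplification): repeatedly perturb the goal, as a stand-in for self-modification, and raise an alarm when it diverges too far from the original.

```python
# Toy goal-drift experiment: perturb a goal vector and flag divergence.
# The vector representation and threshold are invented for illustration.

import math
import random

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

random.seed(0)
original = [1.0, 0.0, 0.0]   # the goal as fixed at launch
current = list(original)

drift_detected_at = None
for step in range(1, 1001):
    # stand-in for self-modification: small random perturbation of the goal
    current = [w + random.gauss(0, 0.01) for w in current]
    if cosine(original, current) < 0.95:   # alarm threshold
        drift_detected_at = step
        break

print(drift_detected_at)  # None means the goal stayed within the threshold
```

The real experiments would of course involve the actual system's behavior, not a random walk; the sketch just shows the shape of the measurement: compare the current effective goal to the original one, over and over, and stop if divergence exceeds a preset bound.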

 

So – a positive Singularity in 10 years?

 

Am I sure it's possible?  Of course not.

 

But I do think it's plausible.

 

And I know this: if we assume it isn't possible, it won't be.

 

And if we assume it is possible – and act intelligently on this basis – it really might be.  That's the message I want to get across to you today.

 

There may be many ways to create a positive Singularity in ten years.  The way I've described to you – the AI route – is the one that seems clearest to me.  There are six billion people in the world, so there's certainly room to try out many paths in parallel.

 

But unfortunately the human race isn't paying much attention to this sort of thing.  Incredibly little effort and incredibly little funding goes into pushing toward a positive Singularity.  I'm sure the total global budget for Singularity-focused research is less than the budget for chocolate candy – let alone beer ... or TV ... or weapons systems!

 

I find the prospect of a positive Singularity incredibly exciting – and I find it even more exciting that it really, possibly could come about in the next ten years.  But it's only going to happen quickly if enough of the right people take the right attitude -- and assume it's possible, and push for it as hard as they can.

 

Remember the story I started out with – Dantzig and the unsolved problems of statistics.  Maybe the Singularity is like that.  Maybe superhuman AI is like that.  If we don't think about these problems as impossibly hard – quite possibly they'll turn out to be solvable, even by mere stupid humans like us.

 

This is the attitude I've taken with the Novamente design.  It's the attitude Aubrey de Grey has taken with his work on life extension.  The more people adopt this sort of attitude, the faster the progress we'll make.

 

We humans are funny creatures.  We've developed all this science and technology -- but basically we're still funny little monkeylike creatures from the African savannah.  We're obsessed with fighting and reproduction and eating and various apelike things.  But if we really try, we can create amazing things -- new minds, new embodiments, new universes, and new things we can't even imagine.