Ten Years to a Positive Singularity

(If We Really, Really Try)


Ben Goertzel


(A talk presented at Transvision 2006, the annual conference of the World Transhumanist Association, held in Helsinki Finland in August 2006)


Since this is a conference of transhumanists, I'm going to assume you're all familiar with the concept of the Singularity, as developed by Vernor Vinge and popularized by Ray Kurzweil and others. 


The Singularity is supposed to be a moment – at some point in the future – when advances in science and technology start occurring at a rate that is effectively infinite compared to the processing speed of the human mind. 


It's a point at which intelligence way beyond human capability comes into play, and transforms the world in ways we literally can't imagine.


This is a scary idea, and an exciting one. 


The Singularity could be the end of us all. 


Or -- it could be the fulfillment of all our dreams.


Ray Kurzweil, in his book The Singularity Is Near, has made a pretty careful argument that the Singularity actually is coming, and is going to come sometime around the middle of the century.  He's drawn a lot of exponential curves, showing the rate of advance of various aspects of science and technology, reaching toward infinity in the period from 2040 to 2050.  He believes that when it comes, the Singularity will make all our lives better.  He sees humans becoming integrated with various technologies – enhancing our brains and bodies, living as long as we want to, and fusing and networking with powerful AI minds.


On the other hand, Hugo de Garis, in his book The Artilect War, has shown a rather different kind of graph: a graph of the number of people who've died in different wars throughout history.  This number also increases exponentially.  De Garis worries that advances in AI technology will create a world war – on one side, those who advocate letting AIs become superhuman; on the other, those who want to be sure humans remain the supreme beings on Earth.  Not a war about religion or money -- a world war about species dominance.  Possibly one that could annihilate the species.


That's the paradox of the Singularity – it's our greatest dream and our worst nightmare, rolled up into one.


Kurzweil, with his curve-plotting, positions the Singularity around 2050.   I think this is reasonable.  De Garis puts his Artilect War around the same time.


But there's a lot of evidence showing how unreliable any curve-plotting is when it comes to complex events.  I think Ray and Hugo have made reasonable arguments – but I can also see ways it could take a lot longer -- or a lot less time -- than they think it will.


How could it take a lot longer?  What if terrorists nuke the major cities of the world?  What if anti-technology religious fanatics take over the world's governments?  Or – less likely, I think – we could hit up against tough scientific obstacles that we can't foresee right now.


How could it take a lot less time?  If the right people focus their attention on the right things.


What I'm going to tell you in this talk is why I think it's possible to create a positive Singularity within the next ten years.


Why ten years?


It's a nice round number.  Just like most of you, I have ten fingers on my hands.  I could have said eight or thirteen instead.


But I think ten years – or something in this order of magnitude – could really be achievable.  Ten years to a positive Singularity.


Before getting started on the Singularity, I want to tell you a story...


This is a fairly well known story, about a guy named George Dantzig (no relation to the heavy metal singer Glenn Danzig!).  Back in 1939, Dantzig was studying for his PhD in statistics at Berkeley.  He arrived late for class one day and found two problems written on the board.  Not knowing what they really were, he mistook them for a homework assignment, jotted them down, went home, and solved them.  He thought they were particularly hard and it took him a while – but he finished them, delivered the solutions to the teacher's office, and left them on the teacher's desk.  Six weeks later, his professor told Dantzig that he'd prepared one of his two "homework" proofs for publication.  They hadn't been homework problems at all – the two problems written on the board had been two of the greatest unsolved problems of mathematical statistics.  Dantzig wound up using his solutions to the problems for his PhD thesis.


Here's what Dantzig said about the situation: "If I had known that the problems were not homework but were in fact two famous unsolved problems in statistics, I probably would not have thought positively, would have become discouraged, and would never have solved them."


Dantzig solved these problems because he thought they were solvable – he thought other people had solved them.  He thought everyone else in his class was going to solve them.


There's a lot of power in expecting to win.  Athletic coaches know about the power of streaks.  If a team is on a roll, they go into each game expecting to win – and their confidence helps them see opportunities to win.  Small mistakes are just shrugged away.  But if a team is on a losing streak, they go into each game expecting to screw up somehow – and a single mistake can put them in a bad mood, and one mistake piles up on another...


To take another example, look at the Manhattan Project.  America thought it needed to create nuclear weapons before the Germans did.  The scientists assumed it was possible – and they felt a huge burning pressure to get there first.  Unfortunately, what they were working on so hard, with so much brilliance, was an ingenious method for killing a lot of people!  But whatever you think of the outcome, there's no doubt the pace of innovation in science and technology in that project was incredible.


What if we knew it was possible to create a positive Singularity in ten years?  What if we assumed we were going to win – as a provisional but reasonable hypothesis –


What if we thought everyone else in the class knew how to do it already?


What if we were worried the bad guys were going to get there first?  


Under this assumption, how then would we go about trying to create a positive Singularity?


Look at the futurist technologies around us


Which ones have the most likelihood of bringing us a positive Singularity within the next ten years?


It's obviously AI. 


Nano and bio and robotics are all advancing fast, but they all require a lot of hard engineering work. 


AI requires a lot of hard work too -- but it's a softer kind of hard work.  Creating AI relies only on human intelligence -- not on painstaking and time-consuming experimentation with physical substances and biological organisms. 


With AI, we just have to figure out how to write the right program, and we're there.  Singularity!  Superhuman AI.  The great dream, or the great nightmare.


But how can we get to AI?  There are two big possibilities: copy the human brain, or come up with something cleverer.


Both approaches seem viable.  But the first approach has a problem.  Copying the human brain requires way more understanding of the brain than we have now.  Will biologists get there within ten years from now?  Probably not.  Definitely not in five years.


So we're left with the other choice – come up with something cleverer.  Figure out how to make a thinking machine – using all the sources of knowledge at our disposal: computer science and cognitive science and philosophy of mind and mathematics and cognitive neuroscience and so forth.


Well, this is what I've been working on for the last 20 years or so.  I've done some programming and some mathematical calculation – and I've studied a lot of science and technology and philosophy – but more than anything I've thought about the problem.


My conclusion is that there are a lot of ways to create a mind.  Human brains give rise to one kind of mind -- and not such a great kind, really.  If you think about it from a big-picture perspective, we humans are really kind of stupid.  Yes, we're the smartest minds on Earth right now.  But it's not all that incredibly difficult to think of ways to make minds better than human minds.


So if it's not that incredibly difficult, why don't we have AIs smarter than people right now?  Well, of course, it's a lot of work to make a thinking machine – but making cars and rockets and televisions is also a lot of work, and society has managed to deal with those problems. 


The main reason we don't have real AI right now is that almost no one has seriously worked on the problem.  And those who have, have mostly thought about it in the wrong way. 


Some people have thought about AI in terms of copying the brain – but that means you have to wait till the neuroscientists have finished figuring out the brain, which is nowhere near happening.  Trying to make AI based on our current, badly limited understanding of the brain is a recipe for failure.  We have no understanding yet of how the brain represents or manipulates abstraction.  Neural network AI is fun to play with -- but it's hardly surprising it hasn't led to human-level AI.  Neural nets are based on extrapolating a very limited understanding of a few very narrow aspects of brain function.


And the AI scientists who haven't thought about copying the brain have mostly made another mistake – they've thought like computer scientists.  I'm a math PhD – I was originally trained as a mathematician – so I think I understand this.  Computer science is like mathematics – it's all about elegance and simplicity.  You want to find beautiful, formal solutions.  You want to find a single, elegant principle – a single structure, a single mechanism – that explains a whole lot of different things.  A lot of modern theoretical physics is in this vein – the physicists are looking for a single unifying equation underlying every force in the universe.  Well, most computer scientists working on AI are looking for a single algorithm or data structure underlying every aspect of intelligence.


But that's not the way minds work.  The elegance of mathematics is misleading.  The human mind is a mess – and not just because evolution creates messy stuff.  The human mind is a mess because intelligence -- when it has to cope with limited computing resources -- is necessarily messy and heterogeneous. 


Intelligence does include a powerful, elegant, general problem-solving component – some people have more of it than others.  Some people I meet seem to have almost none of it at all.  


But intelligence also includes a whole bunch of specialized problem-solving components – dealing with things like vision, socialization, learning physical actions, recognizing patterns in events over time, and so forth.  This kind of specialization is necessary if you're trying to achieve intelligence with limited computational resources.


Marvin Minsky has introduced the metaphor of a society.  He says a mind needs to be a kind of society, with different agents carrying out different kinds of intelligent actions and all interacting with each other. 


But a mind isn't really like a society – it needs to be more tightly integrated than that.  All the different parts of the mind – parts which are specialized for recognizing and creating different kinds of patterns – these all need to operate very tightly together, communicating in a common language, sharing information, and synchronizing their activities. 


And then comes the most critical part -- the whole thing needs to turn inwards on itself.  Reflection ... introspection ... this is one of the most critical kinds of specialized intelligence that we have in the human brain.  And it relies critically on our general intelligence ability.  A mind, if it wants to be really intelligent, has to be able to recognize patterns in itself just like it recognizes patterns in the world – and it has to be able to modify and improve itself based on what it sees in itself.  This is what "self" is all about. 


This relates to what the philosopher Thomas Metzinger calls the "phenomenal self."  All us humans carry around inside our minds a "phenomenal self" – an illusion of a holistic being, a whole person, an internal self that somehow emerges from the mess of information and dynamics inside our brains.  This illusion is critical to what we are.  The process of constructing this illusion is essential to the dynamics of intelligence.


Brain theorists haven't understood the way the self emerges from the brain yet – brain mapping isn't advanced enough. 


And computer scientists haven't understood the self – because it isn't about computer science.  It's about the emergent dynamics that happen when you put a whole bunch of general and specialized pattern recognition agents together – a bunch of agents created in a way that they can really cooperate – and when you include in the mix agents oriented toward recognizing patterns in the society as a whole.


The specific algorithms and representations inside the pattern recognition agents – algorithms dealing with reasoning, or seeing, or learning actions, or whatever – these algorithms are what computer science focuses on, and they're important, but they're not really the essence of intelligence.  The essence of intelligence lies in getting the parts to all work together in a way that gives rise to the phenomenal self.  This is what I think I've figured out, and embodied in the AI design I call Novamente. 
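To make the picture concrete, here's a toy sketch of the shape of the idea – specialized pattern-recognition agents posting to a shared memory, plus one agent that recognizes patterns in the society's own activity.  All the class and pattern names here are invented for illustration; this is not the actual Novamente architecture, just a cartoon of it.

```python
# Toy sketch of a "tightly integrated society" of pattern recognizers.
# Names and mechanisms are invented for illustration.

class SharedMemory:
    """A common store all agents read from and write to."""
    def __init__(self):
        self.patterns = []          # (source_agent, pattern) pairs

    def post(self, agent_name, pattern):
        self.patterns.append((agent_name, pattern))

class VisionAgent:
    name = "vision"
    def step(self, memory, percept):
        # Specialized recognizer: finds repeated tokens in a "scene".
        for token in set(t for t in percept if percept.count(t) > 1):
            memory.post(self.name, ("repeated-object", token))

class TemporalAgent:
    name = "temporal"
    def step(self, memory, percept):
        # Specialized recognizer: notices immediate repetition in time.
        for a, b in zip(percept, percept[1:]):
            if a == b:
                memory.post(self.name, ("event-repeats", a))

class SelfModelAgent:
    name = "self"
    def step(self, memory, percept):
        # The crucial twist: this agent recognizes patterns in the
        # society's *own activity*, not in the outside world.
        counts = {}
        for source, _ in memory.patterns:
            counts[source] = counts.get(source, 0) + 1
        if counts:
            busiest = max(counts, key=counts.get)
            memory.post(self.name, ("busiest-subsystem", busiest))

memory = SharedMemory()
scene = ["ball", "ball", "cup", "ball"]
for agent in (VisionAgent(), TemporalAgent(), SelfModelAgent()):
    agent.step(memory, scene)

for source, pattern in memory.patterns:
    print(source, pattern)
```

The point of the cartoon: the agents share one representation (tuples in one store), and the self-model agent's input is the store itself – a crude stand-in for the "turn inwards on itself" step described above.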


Novamente is a bunch of computer science algorithms all wired together – some of the algorithms I invented myself, and some I borrowed from others, usually with big modifications.  But the key point is that they're wired together in a way that I think can let the whole system recognize significant patterns in itself.


When I'm talking about AI, I use the word "patterns" a lot, and I think it's critical.  I recently wrote a book called "The Hidden Pattern", which tries to get across the viewpoint that everything in the universe is made of patterns.  Everything you see around you, everything you think, everything you remember – that's a pattern!


Intelligence, I think of as the ability to achieve complex goals in complex environments.  And complexity has to do with patterns – something is complex if it has a lot of patterns in it. 


A mind is a collection of patterns for effectively recognizing patterns.  Most importantly, a mind needs to recognize patterns about which actions are most likely to achieve its goals. 


The phenomenal self is a big pattern – and what makes a mind really intelligent is its ability to continually recognize this pattern – the phenomenal self – in itself.
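One toy way to make the "complex goals in complex environments" definition concrete – this is a throwaway formalization of my own for this talk, with invented names and weights, not the actual mathematics behind Novamente:

```python
# Toy formalization: score an agent's intelligence as its
# goal-achievement averaged over goal/environment pairs, weighted by
# how complex each pair is.  All numbers and names are invented.

def intelligence_score(achievement, complexity):
    """achievement[(goal, env)] in [0, 1]; complexity[(goal, env)] > 0.

    More complex goal/environment pairs count for more, so an agent
    that only handles trivial problems scores low.
    """
    total_weight = sum(complexity.values())
    return sum(achievement[k] * complexity[k] for k in achievement) / total_weight

complexity  = {("fetch", "sim-world"): 1.0, ("chess", "board"): 5.0}
thermostat  = {("fetch", "sim-world"): 0.0, ("chess", "board"): 0.0}
virtual_pet = {("fetch", "sim-world"): 0.9, ("chess", "board"): 0.1}

print(intelligence_score(thermostat, complexity))    # 0.0
print(intelligence_score(virtual_pet, complexity))   # (0.9*1 + 0.1*5)/6
```

The weighting is the whole trick: a system gets credit for achieving hard goals in rich environments, not for repeating one easy behavior.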


How Novamente works in detail is a pretty technical story, which I'm not going to tell you right now.  I could mention the names of some of the major parts – but those words wouldn't really tell you anything. 


But the point I want to get across now is the problem that I was trying to solve in creating Novamente.  Not to find the one magic representation or the one magic algorithm underlying intelligence.  Rather -- to piece together the right kind of mess to give rise to the emergent structure of the self.


The Novamente design is fairly big.  It's not as big as the design for Microsoft Word, let alone Windows XP – but it's big enough that for me to program it by myself would take many decades.


Right now we have a handful of people working on Novamente full time.  What we're doing now – as well as building out the basic AI -- is teaching Novamente to be a virtual baby.  It lives in a 3D simulation world and tries to learn simple stuff like playing fetch and finding objects.  It's a long way from there to the Singularity – but there's a definite plan for getting from here to there.


The current staffing of the project is not enough.  If we keep going at this rate, we'll get there eventually – but we won't have the Singularity in ten years.


But I still think we can do it – I'm keeping my fingers crossed.  We don't need a Manhattan Project scale effort; all we need right now is the funding to get a dozen or so of the right people working on the project full time.


I've talked more about AI than about the Singularity or positiveness.  Let me get back to those.


It should be obvious that if you can create an AI vastly smarter than humans, then pretty much anything is possible. 


Or at least, once we reach that stage, there's no way for us – with our puny human brains – to really predict what's possible and what isn't.  Once the AI has its own self and has superhuman level intelligence, it's going to learn things and figure things out on its own.


But what about the "positive" part?  How do we know this AI won't annihilate us all – why won't it decide we're a bad use of mass-energy and repurpose our component particles for something more important?


There's no guarantee of this, of course. 


Just like there's no guarantee that some terrorist won't nuke my house tonight. 


But there are ways to make bad outcomes unlikely. 


The goal systems of humans are pretty unpredictable, but a software mind like Novamente is different – its goal system is better-defined.  So one reasonable approach is to make the first Novamente a kind of Oracle.  Give it a goal system with one top-level goal: to answer people's questions, in a way that's designed to give them maximum understanding. 


If the AI is designed not to change its top-level goal -- and its top-level goal is to sincerely and usefully answer our questions -- then the path to a positive Singularity seems clear.


The risk, of course, is that it changes its goals, even though you programmed it not to.  Every programmer knows you can't always predict the outcome of your own code.  But there are plenty of preliminary experiments we can do to understand the likelihood of this happening.  If our experiments show that Novamente systems tend to drift from their original goals, even when they're programmed not to, then we'll be worried – and we'll slow down our work while we try to solve the problem.  But I'll bet this isn't what happens.
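To give a flavor of the kind of preliminary experiment I mean – this is a toy model with an invented mutation scheme, not a real Novamente test – imagine holding the top-level goal fixed in code while letting subgoal weights jitter under self-modification, and tracking how far they drift from their original values:

```python
# Toy goal-drift experiment (illustrative only; the "self-modification"
# model is invented).  The top-level goal is held fixed by construction
# while subgoal weights are perturbed each step, and we measure total
# drift from the original configuration.

import random

def run_experiment(steps, mutation_rate, seed=0):
    rng = random.Random(seed)
    top_goal = "answer-questions-helpfully"        # never modified
    subgoals = {"parse-question": 1.0, "retrieve-facts": 1.0,
                "explain-clearly": 1.0}
    original = dict(subgoals)
    drift_history = []
    for _ in range(steps):
        # Self-modification step: each subgoal weight jitters a little.
        for g in subgoals:
            subgoals[g] += rng.gauss(0.0, mutation_rate)
        # Drift = total absolute change in weights from the original.
        drift = sum(abs(subgoals[g] - original[g]) for g in subgoals)
        drift_history.append(drift)
    return top_goal, drift_history

top, history = run_experiment(steps=1000, mutation_rate=0.01)
print("top-level goal after 1000 steps:", top)
print("final drift:", history[-1])
```

In a real experiment the interesting question is whether drift stays bounded or accumulates – here it's a random walk by construction, but with an actual self-modifying system you'd run this kind of measurement empirically and slow down if the drift curve looked bad.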


So – a positive Singularity in 10 years? 


Am I sure it's possible?  Of course not.


But I do think itÕs plausible.


And I know this:  If we assume it isn't possible, it won't be.


And if we assume it is possible – and act intelligently on this basis – it really might be.  That's the message I want to get across to you today.


There may be many ways to create a positive Singularity in ten years.  The way I've described to you – the AI route – is the one that seems clearest to me.  There are six billion people in the world, so there's certainly room to try out many paths in parallel. 


But unfortunately the human race isn't paying much attention to this sort of thing.  Incredibly little effort and incredibly little funding go into pushing toward a positive Singularity.  I'm sure the total global budget for Singularity-focused research is less than the budget for chocolate candy – let alone beer ... or TV ... or weapons systems!


I find the prospect of a positive Singularity incredibly exciting – and I find it even more exciting that it really, possibly could come about in the next ten years.  But it's only going to happen quickly if enough of the right people take the right attitude -- and assume it's possible, and push for it as hard as they can.


Remember the story I started out with – Dantzig and the unsolved problems of statistics.  Maybe the Singularity is like that.  Maybe superhuman AI is like that.  If we don't think about these problems as impossibly hard – quite possibly they'll turn out to be solvable, even by mere stupid humans like us. 


This is the attitude I've taken with the Novamente design.  It's the attitude Aubrey de Grey has taken with his work on life extension.  The more people adopt this sort of attitude, the faster the progress we'll make.


We humans are funny creatures.  We've developed all this science and technology -- but basically we're still funny little monkeylike creatures from the African savannah.  We're obsessed with fighting and reproduction and eating and various apelike things.  But if we really try, we can create amazing things -- new minds, new embodiments, new universes, and new things we can't even imagine.