Sunday, March 06, 2005

Terrified Medieval Christs and Weird Picasso Women

Izabela and I went to the art museum in DC today, and I realized that pretty much my favorite thing in visual art is the expressions on people's faces. Even in the extensive galleries of medieval and renaissance pictures of Jesus, which generally make me sick after a few minutes (and aghast that so many humans could believe so many stupid things for so long -- and wondering what things almost as stupid as the whole crucifixion/resurrection drama we may currently believe), I found myself moved by some of the facial expressions.... Some painters (and fewer sculptors, Rodin being my favorite by far) have a remarkable way of catching expressions that capture the essence of a person's being -- or at least, that give the illusion thereof ... they capture SOME essence, and how that essence relates to the actual human who was being painted doesn't matter much from my point of view....

The essence of some human's being -- what does that mean? The core of some human's personality.... It's different for each one of us, but still there are common patterns -- a common essence of being human. Always some pleasure and some pain. Some resignation to fate, some resolution to struggle. In the interesting faces, some deep joy, some terrible suffering. We humans are bundles of contradictions -- that's part of what makes us human.

I thought about the Singularity, of course -- about transcending what is human, and about perfecting what is human to make something that's human yet better than human. And I found myself really intuitively doubting the latter possibility. Isn't the essence of being human all bound up with contradiction and confusion, with the twisting nonstationary nonlinear superposition of pleasure and pain, of clarity and illusion, of beauty and hideousness?

Some humans are perverse by nature -- for instance, priests who condemn child molestation in their sermons while conducting it in their apartments. But even without this nasty and overt sort of self-contradiction, still, every human personality is a summation of compromises. I myself am a big teeming compromise, with desires to plunge fully into the realm of the intellect, to spend all day every day playing music, to hang out and play with my wife and kids all the time, to live in the forest with the pygmies, to meditate and vanquish/vanish the self....

Potentially with future technology we can eliminate the need for this compromise by allowing Ben to multifurcate into dozens of Bens, one living in the forest with the pygmies, one meditating all day and achieving perfect Zen enlightenment, one continually playing children's games and laughing, one proving mathematical theorems until his brain is 90% mathematics, one finally finishing all those half-done novels, one learning every possible musical instrument, one programming AIs, etc. etc. Each of these specialized Bens could be put in telepathic coordination with the others, so they could all have the experience, to an extent, of doing all these different things. This would be a hell of a great way to live IMO -- I'd choose it over my current existence. But it'd be foolish to call this being human. Getting rid of the compromises means getting rid of humanity.

The beauty I see in the faces portrayed by great artists is largely the beauty of how individual human personalities make their own compromises, patch together personal realities from the beauty and the terror and the love and the hate and the endless press of limitations. Getting rid of the compromises is getting rid of humanity....

Trite thoughts, I suppose.... Just another page in my internal debate about the real value of preserving humanity past the Singularity. Of course, I am committed to an ethic of choice -- I believe each sentient being should be allowed to choose to continue to exist in its present form, unless doing so would be radically dangerous to other sentient beings. Humans shouldn't be forced to transcend into uberhumans. But if they all chose to do so, would this be a bad thing? Intuitively, it seems to me that 90% of people who chose to remain human rather than to transcend would probably be doing so out of some form of perversion. And the other 10%? Out of a personality-central attachment to the particular beauty of being human, the particular varieties of compromises and limitations that make humans human ... the looks on the faces of the twisted medieval Christs and weird Picasso women....

(Of course, in spite of my appreciation for the beauty of the human, I won't be one of those choosing to turn down transcension. Though I may allow a certain percentage of my future multi-Bens to remain human ... time will tell!)

Introductory Whining and Complaining About the Difficulty of Getting Funding to Build a Real AI

A bunch of people have said to me recently (in one version or another), "Ben, you write so much, why don't you write a blog like all the other pundits and gurus?"

My answer was that I don't have time, and I really don't -- but I decided to give it a try anyway. Last time I tried blogging was in 2002 and I kept going for a few months, then petered out. Maybe this time will have a better fate!

What's on my mind lately? Frustration, in large part. My personal life is going great -- last year my drawn-out divorce finally concluded; my kids are pretty much settled into their new routine and doing well again; and my new wife Izabela and I are having a great time together.

I'm enjoying doing bioinformatics research with Biomind, and recording whacky music using Sonar4 (the first time I've hooked up a sequencer to my keyboard for many years; I'd avoided it for a while due to its powerful addictive potential).

Life is good. But the problem is: the longer I think about it, the more I write about it and the more exploratory design and engineering work my Novamente colleagues and I do, the more convinced I am that I actually know how to make a thinking machine... an intelligent software program, with intelligence at the human level and beyond.

Yeah, I know, a lot of people have thought that before, and been wrong. But obviously, SOMEONE is going to be the first one to be right....

I don't pretend I have every last detail mapped out. There are plenty of little holes in my AI design, and they'll need to be filled in via an iterative, synergistic process of experimentation and theory-revision. But the overall conceptual and mathematical design is solid enough that I'm convinced the little holes can be filled in.

What's frustrating is that, though I can clearly see how to do it, I can also clearly see how much work it requires. Not a Manhattan Project scale effort. But more work than I could do in a couple years myself, even if I dropped everything else and just programmed (and even if I were a faster/better programmer like some of the young hacker-heroes on the Novamente team).
My guess is that 3 years of 100% dedicated effort by a team of 5-6 of the right people would be enough to create an AI with the intelligence of a human toddler. After that point, it's mostly a matter of teaching, along with incremental algorithm/hardware improvements that can be carefully guided based on observation of the AI mind as it learns.

And I have the right 5-6 people already, within the Novamente/Biomind orbit. But they're now spending their time on (interesting, useful) narrow-AI applications rather than on trying directly to build a thinking machine.

I thought for a while that we could create a thinking machine along the way, whilst focusing on narrow-AI applications. But it's not gonna work. Real AGI and narrow-AI may share software components, they may share learning algorithms and memory structures, but the basic work of building an AGI cognitive architecture out of these components, algorithms and structures has nothing to do with narrow AI.

As CEO of Biomind, a startup focused on analyzing biological data using some tools drawn from the Novamente AI Engine (our partially-complete, wannabe AGI system) and some other AI tools as well, I'm constantly making decisions to build Biomind software using methods that I know don't contribute much if at all toward AGI. This is because from a Biomind point of view, it's often better to have a pretty good method that runs reasonably fast and can be completed and tested relatively quickly -- rather than a better method that has more overlap with AGI technology, but takes more processor time, more RAM, and more development time.

Although our work on Biomind and other commercial apps has helped us to create a lot of tools that will be useful for building an AGI (and will continue to do so), the bottom line is that in order to create an AGI, dedicated effort will be needed. Based on the estimate I've given above (5-6 people for 3 years or so), it would seem it could be done for a million US dollars or a little less.

Not a lot of money from a big-business perspective. But a lot more than I have lying around, alas.

Some have asked why I don't just build the thing using volunteers recruited over the Net. There are two reasons.

One, this kind of project doesn't just require programmers; it requires the right people -- with a combination of strong programming, software design, cognitive science, computer science and mathematical knowledge. This is rare enough that it's a hard combination to find even if you have money to pay for it. To find this combination among the pool of people who can afford to work a significant number of hours for free ... well, the odds seem pretty low.... (Though if you have the above skills and want to work full or near-full-time on collaborating to build a thinking machine, for little or no pay, please send me an email and we'll talk!!)

Two, this is a VERY HARD project, even with a high-quality design and a great team, and I am not at all sure it can be successfully done if the team doesn't have total focus.

Well, I'm hoping the tides will turn in late 2005 or early 2006. Finally this year I'll release the long-awaited books on the Novamente design and the underlying ideas, and following that I'll attempt a serious publicity campaign to attract attention to the project. Maybe Kurzweil's release of his Singularity book in late 2005 will help, even though he's a skeptic about AGI approaches that don't involve detailed brain simulation. I'd much rather focus on actually building AGI than on doing publicity, but, y'know, "by any means necessary" etc. etc. ;-)

OK, that's enough venting for one blog entry! I promise that I won't repeat this theme over and over again, I'll give you some thematic variety.... But this theme is sure to come up again and again, as it does in my thoughts....

Very foolish of the human race to be SO CLOSE to something SO AMAZING, and yet not have the common sense to allocate resources to it instead of, for instance, the production of SpongeBob-flavored ice cream (not that I have anything against SpongeBob, he's a cute little guy...)...

P.S. Those with a taste for history may recall that in the late 1990s I did have a significant amount of funding for pure AI work, via the startup company Intelligenesis (aka Webmind), of which I was a cofounder. We tried for about 3 years and failed to create a real AI, alas. But this was not because our concepts were wrong. On the contrary, it was because we made some bad decisions regarding software engineering (too complex!), and because I was a bad manager, pursuing too many different directions at once instead of narrowly focusing efforts on the apparently best routes. The same concepts have now been shaped into a much simpler and cleaner mathematical and software design, and I've learned a lot about how to manage and focus projects. Success consists of failing over and over in appropriately different ways!