Wednesday, July 23, 2008

Self is to Long-Term Memory as Awareness is to Short-Term Memory

This is a brief addendum to a post I made a while back

http://www.goertzel.org/blog/2008/02/characterizing-consciousness-and-will.html

where I gave a casual but mathematical discussion of will and reflective awareness in terms of self-referential structures ("hypersets").

There, the following recursive definitions are given:

"S is conscious of X" is defined as: The declarative content that {"S is conscious of X" correlates with "X is a pattern in S"}

"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}

Funky, huh? Chew on that for a while!

My point here is to posit a similar definition for that strange beast called the "phenomenal self" (and for a gloriously, Germanically thorough treatment of this entity, please read Thomas Metzinger's masterwork Being No One):

"X is part of S's self" is defined as: The declarative content that {"X is a part of S's self" correlates with "X is a persistent pattern in S over time"}

One thing that's nice about this definition is the relationship that it implies between self and awareness. In a formula:

Self is to long-term memory as awareness is to short-term memory

Elegant, huh?

Your self is nothing more or less than the awareness of your persistent being.

Your momentary awareness is nothing more or less than the self of your instantaneous being.

(Time-span makes a big difference! Indeed, time is almost equivalent to "difference." But that's a subject for another post, for another late-night fueled by too much green tea and too many weird ideas...)

Tuesday, July 22, 2008

Moderately Boring Musings on Will, Responsibility and Holism, Inspired By Some Difficult Personal Situations I'm Not Going To Talk About

Just some late-night thoughts that may not be that original or interesting, but seem to want to get written (to the tune of Charlie Parker's glorious "Au Privave", hot on the heels of "Step into the Realm" by the Roots, one of the few hip-hop bands I like at all...). I guess what I am inching toward here is some sort of cognitive theory of moral responsibility ... but I'm really inching there, one teeny little piece at a time.... (Well, some topics lend themselves to speed better than others....)

I'll start with will, and then move on to my main topic of taking personal "moral responsibility" for one's actions.

Don't worry, I haven't turned into a preacher yet (though I haven't shaved for a while and am sporting a fairly spiffy Jesus-like beard); my basic orientation on these topics is one of systems theory....

So, for starters: Anyone with any sense knows by now that the intuitive feeling of "free will" we have is illusory. Our unconscious decides for us, before any conclusion is derived by the process of conscious ratiocination that feels like it's making a decision.

So why bother with the decision process at all? Why not just go with the flow of the non-ratiocinative unconscious? Because we know that the intensely-conscious decision process helps dynamically restructure our long-term memory in a way that will help our unconscious make better decisions in the future.

Next: Anyone with any sense knows that the notion of "moral responsibility" is, to a large extent, a holdover from obsolete religious belief systems.

And, the notion of "taking personal responsibility" for one's actions has -- in most particular instances -- questionable empirical grounding. After all, anything any one of us does is, to a large extent, caused by our social and physical context -- as Saddam famously said in the South Park movie: "It's not my fault that I'm so evil ... it's society ... society...." Of course, it really IS society ... that cute little pseudo-Saddam wasn't lying ... and in any particular case, none of us really has the information to tease out the internal from the external causes underlying any of our actions ... and yet, this is a poor perspective to take, in spite of the element of truth underlying it.

(Causality, in the end, is not really a scientific concept anyway: it's a tool that minds use to understand the world. A causes B, from the perspective of mind M, if

  • A precedes B
  • The probability of events in class B is differentially higher, given the prior and correlated occurrence of events in class A.
  • M can fairly confidently analogize that, if it were to carry out some action similar to A, then some event similar to B might be likely to follow
But that's a topic for another blog post, another day ... and is covered to some extent in the last chapter of my co-authored book Probabilistic Logic Networks, which Springer is supposed to publish this month...)
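(For the curious, here's a quick toy sketch of the first two criteria in Python -- purely illustrative, not any standard causal-inference method, and the third, agent-relative criterion is left out:)

```python
import random

def looks_causal(events, window=1.0, trials=2000, rng=random.Random(0)):
    """Toy illustration of the first two criteria above (purely illustrative).
    `events` is a list of (time, label) pairs, with labels 'A' and 'B'.
    The third criterion -- M analogizing to its own possible actions -- is
    agent-relative and not modeled here."""
    a_times = [t for t, lab in events if lab == 'A']
    b_times = [t for t, lab in events if lab == 'B']
    if not a_times or not b_times:
        return False
    t0 = min(t for t, _ in events)
    t1 = max(t for t, _ in events)

    def b_follows(t):
        # does some B-event occur within `window` time units after time t?
        return any(0 < tb - t <= window for tb in b_times)

    # Criterion 1: the A-events precede the B-events they're being credited with
    precedes = min(a_times) < max(b_times)

    # Criterion 2: B is differentially more probable just after an A-event
    # than just after a randomly chosen time
    p_b_given_a = sum(b_follows(t) for t in a_times) / len(a_times)
    p_b_baseline = sum(b_follows(rng.uniform(t0, t1)) for _ in range(trials)) / trials

    return precedes and p_b_given_a > p_b_baseline
```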

Sooo .... Why bother with the process of "taking moral responsibility" at all? Because we know that doing so helps us structure our long-term memory in a way that will help our unconscious take better actions in the future.

When we do something we wish we hadn't done, the act of assigning "responsibility" to ourselves causes us to insert a "correction signal" into our unconscious, which then modifies the structure of our internal declarative/procedural knowledge base in a way that makes it less likely we'll do similar regret-worthy things in the future. This is the case even though we (i.e. the deliberative, ratiocinative, "decision process" aspect of ourselves) don't know that much (rationally or intuitively) about how the unconscious works, and can't really untangle the various causal threads weaving through our minds and our worlds and leading to our actions.

The "ordinary waking state of consciousness" that most people occupy most of the time, involves a coupling of ratiocinative-decision-making with the free-will illusion, and a coupling of moral-responsibility-taking with some semi-religious notion of moral-agency. But it's possible to get into a state of mind where you carry out ratiocinative-decision-making and moral-responsibility-taking without any significant illusions attached to them ... simply because these cognitive dynamics tend to lead to effective overall system functioning.

Now, when I say "it's possible to get into a state of mind where X holds", am I saying that "by exercising one's free will, one can cause oneself to get into a state of mind where X holds" ?

Not really. What I'm saying is that "Sometimes the self-organizing dynamics of a mind coupled with an environment will result in that mind getting into a state of mind where X holds."

And what's the point of me telling you this? Well, some states of mind want to spread from one mind to another....

Much of what prompted the thoughts in this post, was a recent, extremely harrowing personal experience dealing with someone who systematically avoids taking personal responsibility for their own actions. (Sorry for those who like soap operas, but I'll keep mum about the details here!) Whenever this person does something destructive to themselves or others, which they later regret, they immediately assign blame to others. After all, this is incredibly easy to do, right? Every single action any one of us takes, is a result of a huge number of different factors, including things other people have said or done, the weather, what we ate last night, and probably some butterfly that flapped its wings in North China in 5544BC. So why "take the blame" for our own actions when there are so many other causal factors on which one can pin it?

Now, of course the whole notion of "blame" and "guilt" and "shame" is foolish and misguided, being based on religious (or at best quasi-religious) models of the self and the meaning of life and so forth. Yet, this doesn't mean there is ZERO value in taking responsibility, internally, for one's actions. If one does not internally take responsibility for one's own actions, one will never send those necessary correction-signals to one's own unconscious. Then one will just keep on doing those regrettable things.

Removing the obsolete, flawed quasi-religious concepts of blame, shame and so forth from one's inner mental landscape is an important step toward becoming a rational and self-aware, fully-realized person (of course, some people ... definitely non-Jewish ones ... never internalized these concepts in the first place...); but, once they are removed, they need to be replaced with something else ... they need to be replaced with a recognition of the mind as a holistic, complex dynamical system; and with a recognition of the role of the deliberative, ratiocinative aspect of mind as modulating the complex nonlinear dynamics of the unconscious.

None of us can control ourselves, none of us is fully aware of the dynamics by which we operate (in part because of basic information-theoretic limitations on the extent to which any finite system can understand itself; in part because of information-theoretically unnecessary limitations posed by the human brain architecture, which did not evolve in situations where acute self-awareness and mental self-control were key aspects of the fitness function). But "we" (the deliberative, ratiocinative "phenomenal self" portions of our minds) can modulate the dynamics of the other portions of our minds, via doing things like rational-decision-processes and responsibility-taking....

Anyways, I estimate the odds that this sort of highly abstract analysis is going to change the orientation of anyone who is strongly inclined to avoid taking responsibility for their own actions, at approximately 0.0001. Too bad. So, having unburdened myself of these overly abstract and largely useless thoughts, I'll now return to some more productive sorts of thinking ... infinite-order probabilities ... automatic learning of syntactic relationship types via statistical analysis of parallel corpuses ... ahhhhh ... much nicer than dealing with the perversities of human psychology and morality ;-))

Thursday, June 19, 2008

AGI in Xiamen ... and some rambling on the "creativity economy"

I just returned from 2.5 weeks in the Orient ... a week in Japan, doing biz meetings, going to a virtual worlds conference, seeing an awesome guitarist at a weird bar called BarTube, visiting an old friend, and hanging out w/ my son who is staying there for a month studying Japanese and playing Go ... a day or so in Seoul (visiting a humanoid robotics research group and a virtual pets company) ... a few days in Hong Kong giving a talk at WCCI's Human-Level AI session (on how to make a human-level NLP system by partly cheating, see the paper at http://novamente.net/papers) ... and a week in Xiamen, which is in China right across the water from Taiwan ... here's the beautifully situated Xiamen University ...


My friend Hugo de Garis (inventor of evolvable computing, prophet of the Artilect War and all around creative thinker)


(there's me and Hugo and his wife) ... is now a prof at Xiamen University and he's pulling together a humanoid robotics team, whose goal is to spend 4 years making an intelligent computer brain for a Nao humanoid robot:

Hugo and I have been plotting a way to make a clever Nao using his evolved neural nets for perception and action, and the OpenCog system for cognition and overall system control. The Xiamen folks seem to like the plan and we're discussing the possibility of me spending a couple months there each summer to collaborate, and them funding some students to work on the OpenCog side of the project. If someone follows through on the idea I've been selling, of integrating a robot simulator (like Gazebo) with a virtual world (like OpenSim), this could synergize really nicely with Novamente's AI-in-virtual-worlds stuff....

And Xiamen would be a very nice place to spend summers...





I've been fascinated by China since youth, probably due to my mother doing grad work in Chinese history and philosophy back then. She gave me a bunch of Chinese history books and stories and poetry to read, which made me fascinated with the culture. When I was 17, halfway thru my 3rd year of university, I applied for a scholarship to go to China for a year and do research relating non-well-founded sets (hypersets) to Buddhist cognitive philosophy. But the scholarship required me to know Chinese and I didn't, so I didn't go.... (I've never been able to put much energy into learning languages... too much other interesting stuff to study and think about ... and I find it hard to pick up languages via immersion because of my habit of not paying attention to what anyone is saying or doing around me ... so I'm rarely actually immersed in anything but my own thoughts ;-O ) ...

ANYWAYS ... I met Novamente & OpenCog's Chinese contributors face-to-face, which was very nice... here we have (back: Lian Ruiting, Guo Junfei, Chen Shuo, Rui Liu, me)



Very smart, interested, ambitious people!

Hugo is convinced that China is the country of the future and America is already obsolete. He foresees a coming century of reverse brain drain, where China recruits smart scientists and engineers from Western nations....

It might happen -- I don't rule it out. Of course, unlike Hugo, I think some sort of technological Singularity is very likely by mid-century and maybe sooner -- but let's ignore that for the moment ... talking just in conventional political/cultural terms, it's not obvious to me that he's right.

No doubt China has very many very smart and ambitious and hardworking people (like the ones pictured above!) ... but the cultural differences w/ the West are profound and I don't think any of us understands what they mean in terms of the future of science and engineering.

One observation I like to make is as follows. People talk about the knowledge economy ... where manual work has long been outsourced to 3rd world countries, leaving 1st world countries increasingly consumed w/ knowledge work.

More and more, the US is becoming a pragmatic knowledge-integration economy -- specialized knowledge like programming and science gets farmed out to 3rd world countries, but the task of integrating together various pieces of knowledge for practical purposes is still done in America. Even in Novamente, which is a damn international company, we do programming and science and project management overseas, but the figuring-out of what programming and science needs to be done to serve business goals is largely done in the US. Because the US is where our customer companies are -- even if their work is largely done overseas, the high-level staff defining their vision are mostly here. The matching-up of technology and business, where Novamente is concerned, occurs mainly within the arena of US culture. (We do have overseas customers, but they are either run by Americans or following business models that closely copy American ones.)

The next step, I think, is the creativity economy. Even integrative knowledge will become commoditized. Creation of new ideas will be the LAST thing to get commoditized. But this is exactly where America excels. No nation on Earth fosters creativity as well as the USA. And for this reason, I'm not so sure that America's period of dramatic success is over. The more science and technology accelerate, the more critical creativity becomes -- and, lame as American culture and institutions are, they seem better than most alternatives at fostering wide-ranging creativity. (The only cultures I've known that seemed maybe more creativity-friendly were Australia, New Zealand and Hungary. But those are small places, population-wise.)

There is loads of creativity in China, for instance, on a personal level. Very creative people. But I'm not sure the culture fosters creativity in the way that US culture does. Oriental culture seems to favor obedience a lot more than US culture, and creativity is often not compatible with obedience.... The US is probably the most anarchic major developed country -- which has its downsides, especially for those below the poverty line in the US -- but, it seems that anarchy and creativity are inextricably entwined.

If China evolves a culture of creativity, then Hugo will be proved right and this will become the Chinese century ... and maybe the Singularity will get launched in China (hey, maybe it will get launched there anyway via Hugo's and my collaboration!!!)..... But that's a big "if", I suppose. Yet one feature of Chinese history is its tendency toward sudden, radical changes of one sort or another. Time will tell.

Anyway I look forward to returning to Xiamen and other parts of China when my schedule permits (hopefully for a couple months next summer, and a couple weeks in the fall or winter) ... there is a definite energy there that I don't find in developed countries these days, nor in 3rd world countries ... there is a feeling of "waking up" and progress that is exciting...

And, more importantly, there is a possibility of creating a thinking machine and doing other amazing technology projects there more rapidly than in other parts of the world, due to the availability of brilliant scientists and engineers at a relatively low cost (esp. outside the tier 1 cities). Whether or not China develops a culture of creativity allowing it to "own" the next century, there are loads of opportunities for international collaboration ... like what Hugo and I are trying to set up....

But anyway. Enough rambling. I've been sleep-deprived since returning from China, due to jet lag issues ... tonight I'll go to sleep "early" (i.e. maybe by 1AM) and hopefully actually get a full night of sleep.. (yah right...)





Thursday, June 05, 2008

Eureeka!! -- The Underlying Logic Unifying Quantum Theory and General Relativity, Revealed Over a Plate of Sour Fish Consumed Over South China; Plus Long Digressions on Mark Twain, the Pathetic Woes of Middle Age and the Good Old Mongolian Skin-Peeler

[I wrote this post 2 weeks ago, but didn't get around to posting it due to being in China, with a slow Net connection..]

En route from Seoul to Hong Kong, exhausted from a 5 hour night's sleep following a 4 hour night's sleep, over-jazzed by too much strong coffee (which I rarely drink), stomach-sickened by ordering and consuming random dishes in a Korean restaurant via pointing at random hieroglyphs on the menu and hoping vainly for the best ... head full of Mark Twain's wacky biography which I just finished ... irrationally nervous due to having left my oldest son in Japan to tour around on his own for a week before his Japanese class in Kanazawa starts (yes, he's mature enough to handle himself ... and Japan is a damnably safe place, aside from the risk of spending all your money ... but even I the ultimate anarchist parent can't help a bit of worry) ... dulled almost but not quite to a stupor by a relentless series of software technology oriented business meetings (all with wonderful and interesting people, but still, there's only so much meeting I can take) ... I picked up Lee Smolin's book on quantum gravity, which I bought for my physician-cum-maverick-physics-theorist father-in-law years ago but never read myself ... and while reading a totally irrelevant passage and eating the oddly sour fish that passes for food on Korean Air, some very simple and obvious ideas popped into my mind, and I realized to my surprise that, via converging together several streams that have been tumbling through my head for years, I'd happened upon what appeared to be the correct probabilistic logic of unified quantum gravity.

I'm eager to write up this logic in a paper, but, I've promised myself not to undertake anything serious -- except tasks critical for Novamente as a business, or in order to fulfill obligations already incurred -- until the OpenCog Prime (http://www.opencog.org) wiki pages are done (maybe another 20-30 hours of work, but hours for concentrated writing/editing work are very hard to come by these days due to the combination of business obligations and ongoing research projects needing supervision and/or feedback).... But I'll indulge myself in a brief blog post on the topic as a stopgap - partly to ensure the idea doesn't escape from my mind tonight when I finally slip into the deep sleep my body's been craving for 72 hours or so....

Twain's bio was a fascinating read, by the way. Three things among many others struck me, viewing his life-story from a selfish perspective, in terms of its potential lessons for my own life. One is the way he spent a load of his time on stuff other than writing -- business of various sorts, as well as lecturing, traveling and so forth. But these "distractions" didn't seem to detract from his productivity as a writer as much as I would have thought -- they filled his head with stimulation and ideas, and no doubt made his writing more interesting than if he'd just sat home writing all day. Second is the romance he found in business pursuits ... which reminded me a bit of Rimbaud, who gave up poetry as a very young man after too few years as a writer, and wasted his twenties chasing African gold, ultimately dying from poisoning incurred by wearing gold under his undies to hide it from thieves. Rimbaud, due to his premature death among other issues, failed to transform his digressive life experiences into art. I can see in my own psychology the excitement that the business world held for these people: it does stimulate parts of the mind that creative art and science don't touch. Finally I'm struck by the amount of real trash literature Twain produced. I'm reminded of Danilo Kis's (a truly great Serbian writer -- thx to Predrag Janicic for waking me up to him) comment that he didn't write his complete works, only his selected works. Twain was not like that. Twain's best work was awesome; his worst work was terrible. He could have omitted a good 50% of his production and his legacy would be greater, not less. Philip K. Dick had the same property: there's Ubik ... and then there's Dr. Futurity.... The lesson for me is, I suppose, not to worry too much about spending time on apparently digressive pursuits (like writing this blog post, um) -- so long as they're feeding the creative engine one way or another -- and given the limited time I have for creative pursuits, to try hard to be more like Kis than Twain or Dick, and filter out crappier works before I take the time to produce them.

Another striking thing about Twain was the way he foresaw the power of machinery to alleviate human suffering -- a lesson that seems obvious these days but was surprisingly poorly understood in his time, even though the industrial revolution was in full swing and new mechanical inventions of all sorts were pouring out of human minds at an amazing rate. If you haven't read it, his Connecticut Yankee in King Arthur's Court -- arguably the first American SF novel -- is a hilarious and deeply insightful premonition of the promise and peril of advanced technology. As a time travel fable, it's got Back to the Future beat by a long shot, without need of paradoxical absurdities beyond those intrinsic to human nature.

And now... what about quantum gravity...

Three threads need to be drawn together into a single mathematical formalism. But there really seems no obstacle to doing so. (Except time of course, which is a distressingly rare commodity for me these days. Must confess to a bit of jealousy of my son as he wandered off from Tokyo to Kyoto, with no specific plan for spending his days, other than to amuse himself. I never intended to accumulate so many obligations -- keep companies running, organize conferences, pay other people's college tuition, close a mortgage, finish a book, pay child support each month, drive the kids to and from school, blah blah blah blah blah ... I once actually thought I'd live the life of the "free and easy wanderer" from the Chuang Tzu (on my mind as the flight I'm on approaches China), or maybe of Paul Erdos who freeloaded off one friend after another as he spent his life journeying around the world doing mathematics and taking drugs ... I never envisioned taking on all this responsibility for other people (kids, wife, ex-wife) and organizations (companies, non-profits, egads!) ... yet it's all wonderful, interesting stuff ... people and ideas I really care about ... so I'd really be an ass to complain ... it's a fantastic time to be alive ... yet not quite as fantastic as a few decades hence will likely be, when minds will be far more fully liberated from the horrifying/stultifying constraints of legacy human physiology ... but, well, anyway...)

Quantum gravity!

Thread one was invented by Saul Youssef, and I've written about it before. Check out the lovely bibliography he's assembled at 


Brilliant, brilliant man. A hero of our time! Someone give that man a muffin!!

The observation here is that if you're willing to take the step of assuming probabilities are complex rather than real numbers, the basic rules of quantum theory fall right out. This is one of those things that seems shocking and weird at first, and then seems tremendously obvious after you read through the math. Three cheers for Saul Youssef!!
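To give a flavor of why complex-valued probability changes things so fundamentally, here's a tiny two-path ("double slit") toy in Python -- just a cartoon of the idea, not Youssef's actual derivation:

```python
import numpy as np

# With ordinary real probabilities, the two paths just add; with complex-valued
# "probabilities" (amplitudes), they interfere. This toy is only a cartoon of
# the idea, not Youssef's exotic-probability derivation.
amp1 = amp2 = 1 / np.sqrt(2)              # two equal-weight paths to the detector
phases = np.linspace(0, 2 * np.pi, 5)     # relative phase between the paths

classical = amp1**2 + amp2**2                            # always 1.0: no interference
quantum = np.abs(amp1 + amp2 * np.exp(1j * phases))**2   # oscillates between 0 and 2

for ph, q in zip(phases, quantum):
    print(f"relative phase {ph:4.2f}:  real-probability {classical:.2f}   complex-probability {q:.2f}")
```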

Thread two is something I came up with a couple years ago, and wrote up in a paper which I'm in the middle of submitting for publication. I sent the paper to a journal and they sent it back asking me to provide names and mailing addresses of eight referees able to review the paper. I've been lagging on that task along with a huge amount of other stupid paperwork that's accumulated during the last N years. I guess the editor couldn't think of anyone to send it to. The idea, anyway, is infinite order probability.

An ordinary probability is a probability of an event. A probability distribution is a function that assigns a probability to each one of a set of mutually exclusive outcomes of some event (the different values assigned to different exclusive outcomes must sum to one). A second-order probability is a probability distribution over probability distributions ... it's a function that assigns a probability to each one of a set of probability distributions. A third-order probability ... etc.

An infinite-order probability is a function that assigns a probability to each one of a set of infinite-order probability distributions. Sounds odd, but it's a mathematically consistent idea, as I showed in my paper. I also showed that these oddball entities are closely related to some much more familiar and intuitive mathematical entities, Markov matrices.
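For concreteness, here's what the second-order case looks like in a few lines of Python (only the second-order case -- the infinite-order construction from the paper isn't reproduced here):

```python
import numpy as np

# Second-order probability, in miniature: a probability distribution over
# first-order distributions for a single coin.
first_order = np.array([
    [0.9, 0.1],   # a distribution saying "heads very likely"
    [0.5, 0.5],   # a fair-coin distribution
    [0.2, 0.8],   # a distribution saying "tails likely"
])
second_order = np.array([0.3, 0.5, 0.2])   # probabilities assigned to each row above

# Averaging the first-order distributions under the second-order weights
# collapses everything back to an ordinary first-order distribution --
# a matrix-vector product, which hints at the Markov-matrix connection.
collapsed = second_order @ first_order
print(collapsed)   # [0.56, 0.44] -- still sums to 1
```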

The third thread is causal networks. A foundational notion in general relativity is causality. The causal network of events, in relativity, tells you, for any pair (A, B) of events, whether A can be causal for B. This has to do with the finitude of the speed of light: if A and B are too close in time and too distant in space, there may be no way for A and B to causally affect each other.

If A and B are not causally related, there may still be some event C so that C is causal for both A and B. In that case we may say that, probabilistically speaking, A and B are independent conditional on C. That is,


P(A & B | C) = P(A | C) P(B | C)


The causal network gives us a set of independence assumptions on the space of events.

General relativity is in essence a dynamic on causal networks: it tells you how a causal network at one time (plus some extra information) gives rise to a different, related causal network at a subsequent time.
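Here's a quick numerical illustration of that independence property, for a toy common-cause setup (just a sanity check of the factorization above -- nothing relativistic about it):

```python
import random

rng = random.Random(0)

# Toy common-cause setup: C fires at random; A and B each depend only on C.
# If the only causal route between A and B runs through C, then conditioning
# on C should (approximately) factor the joint:  P(A & B | C) = P(A | C) P(B | C).
N = 200_000
samples = []
for _ in range(N):
    c = rng.random() < 0.5
    a = rng.random() < (0.8 if c else 0.2)   # A depends only on C
    b = rng.random() < (0.7 if c else 0.3)   # B depends only on C
    samples.append((a, b, c))

given_c = [(a, b) for a, b, c in samples if c]
p_a = sum(a for a, _ in given_c) / len(given_c)
p_b = sum(b for _, b in given_c) / len(given_c)
p_ab = sum(a and b for a, b in given_c) / len(given_c)
print(f"P(A&B|C) = {p_ab:.3f}   P(A|C)P(B|C) = {p_a * p_b:.3f}")   # both come out near 0.56
```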

Finally let's reflect on what Smolin (see Three Roads to Quantum Gravity) calls the "strong holographic principle." His reasoning for this principle is subtle and involves the Bekenstein bound and related results, which state that all the information about the interior of some physical region may actually be thought of as being contained on the surface of that region. (He explains this better than I could, so I'll just refer you to his book.)

What the principle says is: a la Nietzsche, there are only surfaces. Re-read Nietzsche's Twilight of the Idols and you'll see that he presaged quantum gravity here, in a similar way to how he presaged quantum theory proper in his vision (in The Will to Power) of the universe as composed of a dancing swarm of discrete interacting quanta. Kant posited phenomena and noumena; Nietzsche saw only phenomena. Smolin also. Smolin views the universe as a collection of surfaces, each one defined as a relationship among other surfaces. Put in words like this, it sounds mystical and fuzzy, but there's math to back it up -- the words just hint at the mathematical reality.

But is each of these Smolin surfaces definitively known? No. Each one is probabilistically known. And if each of these surfaces is to be thought of as a relationship between other surfaces, then this means each of these surfaces is most directly modeled as a hyperset (see my prior blog posts on these mathematical constructs). (This is not how Smolin models things mathematically, but I doubt he'd be opposed, as he's used equally recondite math structures such as topoi.) So these surfaces should be modeled as probabilistic hypersets -- aka infinite-order probability distributions.

But what kinds of probabilities should be involved in these distributions? Clearly, Youssef has taught us, these should be complex probability distros -- or in my variation, infinite-order complex probability distributions.

The inescapable conclusion is: The physical universe is a dynamically evolving causal network defined on an infinite-order complex probability distribution.

You read it first here, folks.. ;-O

Or, to put it a bit more conservatively: A useful, perhaps critical language for modeling quantum gravity phenomena is the logic of causal networks on infinite-order complex probability distributions.

There are fun connections here with the psychology of self-awareness and free will, as I've discussed in a couple previous blog posts (follow the links). According to those blog posts, a good way to model reflective awareness would be using infinite-order real probability distributions; and a good way to model will would be using causal networks on these distributions. What quantum theory introduces is the complex-number probability aspect, which makes everything counterintuitive and weird.

I hope I can really find time -- amidst the manifold obligations of middle age plus the not incidental life-task of creating superhuman AI, plus other distractions like bioinformatics and fiction and music and what-not ... and family and the occasional personal entertainment -- to write these ideas up carefully, because I really do think they have deep potential.

There seem to be more connections lurking here: the logic of causal networks seems somehow inescapably tied up with Clifford algebras, providing a tie-in with my algebra of multi-boundary forms (my only publication in a physics journal so far, but it's really a math paper). Presumably one can go from causality somehow straight to discrete Clifford algebras using some kind of axiomatic derivation, and from there to the various beautiful algebraic symmetries underlying modern physics ... Gell-Mann's "Eightfold Way" and its kin ... but anyway, the flight's about to land and the stewardess wants me to put away my laptop, so the blog post is gonna end .. I'll post it online when I get back to the hotel assuming there's functional internet there ... the inimitable Yan King Yin (famous on various AI email lists) is picking me up at the airport and I'm curious to meet him, although I'm so worn out I'm not sure I'll be lucid enough to milk the occasion's potential for lively AI discussions....

How about the bloody Yverse (see previous blog post on this)? Each Smolin surface ... each relation in the network of interdefining hyperrelations ... defines its own multiverse: a quantum multiverse relative to its own perspective. The network of surfaces (aka relationships) is then a Yverse. QED.

Another day, another dozen digressions ... it's SOOOOO tempting to take a few days and formalize the logic of causal nets over complex infinite-order distros, but instead (in between biz meetings and AI research meetings and conference speeches and meals with AI colleagues) I'll spend my "spare" hours in the next few weeks on the OpenCogPrime documents ... a very tedious matter of taking about 50 wiki pages from the Novamente wiki site and editing them down into OpenCogPrime rather than Novamente Cognition Engine pages... yecch...

(Any wealthy patrons out there want to hire me a secretary, a housecleaner and a scientific assistant? I can't promise the Singularity will be accelerated by a few years but it's a definite possibility. For sure a lot more fascinating math, art and science would be generated were I to be thus endowed. (And, getting back to Twain, I wonder what additional great works he would have produced if some of his rich friends had decided to fund him, sparing him the financial anxiety that led him to waste years of his life on various harebrained business schemes. Yeah, they provided grist for his creative mill ... but there's such a thing as too much grist and not enough time to mill.) But I can't complain too much (er, OK, wait, I guess I am...) ... whenever I get TOO frustrated at the realization that 50% of my really good ideas and creations will remain forever unarticulated or un-worked-out-in-detail because I've failed to be born rich or become rich (so far), I remind myself of my favorite Haruki Murakami character, the Mongolian Skin-Peeler ... a World War II torture artist who tortured Chinese prisoners of war by slowly peeling their skin off ... as I see Ulaanbaatar on the video screen of the plane as it approaches Hong Kong (haha, I'm a bad boy and failed to shut off my laptop when instructed ... how very non-Oriental of me!!) it's hard for me not to feel thankful that I'm not one of his victims ... I've got my epidermis attached to my dermis, woo hoo! ... and I at least have time to work out a nontrivial percentage of the cool ideas and creations that course through my overheated brain...)

(While you're at it, imaginary patron, recruiting Novamente LLC a CEO with lots of game or virtual world industry experience would be nice. I think I'm doing a decent job as CEO, with help from Bruce and Cassio and Wendy and my other wonderful colleagues, but it would be nice to have a sufficiently complete management team that I could spend 80% of my Novamente-time on science rather than business. And you may as well recruit us a kick-ass project manager too, so Cassio can help me out with research and retire from project management. (Ok, dream on, Ben.... And remember the Mongolian Skin-Peeler....). And while you're at it, throw in maybe $1M per year so that I can actually fund a team of kick-ass programmers to build a thinking machine ... in case you haven't heard I have a pretty detailed and well-argued design for one, but it's getting built bloody fucking slowly due to lack of funding, and because it's not the sort of thing where partial progress yields exciting incremental results, any more than building 30% of a human brain would yield a 30% functional human... but I dididididididigress ;_)

(I think my wife is really, really tired of hearing about the Mongolian Skin-Peeler. He seems to occupy an unjustifiably prominent role in my emotional topography. Read The Wind-Up Bird Chronicle.)

Mark Twain, I add, would have been a hell of a blogger; far more entertaining than me. He wrote a dozen letters each day back then in the pre-digital dark ages.

Time to get off the plane.

Friday, May 02, 2008

Open-Source Robots + Robot Simulators + Virtual Worlds + AI = ???

I’ve been reading up on the iCub open-source humanoid robot lately, and I think it’s pretty exciting. Given what open source has done for Web browsers, bioinformatics tools and other sorts of software, the possibility of harnessing the same development methodology for robot hardware and software development seems almost irresistibly exciting.

I’m no roboticist, but I do know something about the AI software that robots need to understand the world and act in it – and I’ve been doing a lot of work lately on the use of AI to control simulated agents in virtual worlds. In this vein, this blog entry contains some follow-up thoughts about the possibility of building connections between the iCub and various other open-source software systems relevant to AI and virtual worlds.

For starters: What if someone made a detailed simulation of iCub in Gazebo, an open-source 3D robot simulation platform? Then folks around the world could experiment with iCub without even building a robot, simply via writing software and experimenting with the simulation. Experiments with other robots and Gazebo have shown that the simulation generally agrees very closely with real-world robotic experience.

And what if someone integrated Gazebo with OpenSim, the up-and-coming open-source virtual-world platform (which uses an improved version of Second Life’s user interface, but features a more sophisticated and flexible back-end architecture, and best of all it’s free)?

Furthermore, work is underway to integrate OpenSim with OpenCog, an open-source AI platform aimed at advanced machine cognition (yes, I’m one of the organizers of OpenCog); and OpenSim could similarly be integrated with OpenCyc, OpenNARS, and a host of other existing open-source AI platforms. Throngs of diversely customized, simulated iCubs controlled by various AI algorithms could mill around OpenSim, interacting with human-controlled avatars in the simulated world, learning and sharing their knowledge with each other. The behaviors and knowledge learned by the robots in the virtual world could then be transferred immediately back to their physically embodied brethren.

What stands between us and this vision is “just” some software integration work ... but of course, this kind of work isn’t easy and takes time and expertise. For various economic and cultural reasons, this sort of work has not been favored by any of the world’s major R&D funding sources – but the open-source approach seems to have increasingly high odds of getting it done. It seems at least plausible that iCub won’t go the way of OpenPINO and other prior attempts at open-source robotics, and will instead combine with other open-source initiatives to form a key part of a broadly-accepted, dynamically evolving platform for exploring physical and virtual humanoid robotics.
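To indicate the shape of that integration glue, here's a deliberately tiny Python sketch. Every class and method name in it is hypothetical -- none of this is the real Gazebo, OpenSim or OpenCog API -- it just shows the basic loop: percepts flow from a (simulated or physical) body to an AI process, and motor commands flow back:

```python
# Purely hypothetical sketch of the "glue" layer described above. None of these
# class or method names come from Gazebo, OpenSim or OpenCog; they only indicate
# the shape of the integration work.

class SimulatedBody:
    """Stand-in for a simulated iCub/Nao body (a real version would wrap a simulator)."""
    def read_percepts(self):
        return {"vision": "...", "joints": [0.0] * 12}   # placeholder sensor frame

    def apply_commands(self, commands):
        print("applying motor commands:", commands)

class CognitionEngine:
    """Stand-in for the AI side (a real version would wrap an OpenCog-style engine)."""
    def decide(self, percepts):
        return {"head_pan": 0.1}   # placeholder decision

def bridge_loop(body, mind, steps=3):
    """Relay percepts to the mind and motor commands back to the body."""
    for _ in range(steps):
        percepts = body.read_percepts()
        commands = mind.decide(percepts)
        body.apply_commands(commands)

bridge_loop(SimulatedBody(), CognitionEngine())
```

The point of keeping the loop this abstract is that the same bridge could be pointed at a simulated body in Gazebo/OpenSim or at a physical robot -- which is exactly how learned behaviors could transfer between the virtual and physical iCubs.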

Sunday, April 06, 2008

Artificial Wisdom (... episodic memory, general intelligence, the Tao of John Coltrane, and so forth)

Every now and then, someone suggests to me that, alongside the pursuit of Artificial Intelligence, we should also be pursuing "Artificial Wisdom."

I always figured the "artificial wisdom" idea was probably just a bunch of useless English-language wordplay -- but one night last week, while watching Idiocracy with the kids for the second time (great movie exploring a non-Singularity-based future by the way ... highly recommend it!), I spent a while surfing the Web on my laptop refreshing my memory on how others have construed the "wisdom" concept and musing on what it might mean for AI.

Surprisingly enough, this led in some moderately interesting directions -- nothing revolutionary, but enough to justify the couple hours spent musing about it (and another 90 minutes or so synthesizing and writing up my glorious conclusions).

My main conclusion was a perspective in which wisdom is viewed as one of three core aspects of intelligence, associated with three distinct types of memory:

  • cleverness, associated with declarative memory (and the ability to manipulate abstract, certain or uncertain declarative knowledge)
  • skillfulness, associated with procedural memory (and the ability to effectively learn and adapt new procedures based on experience)
  • wisdom, associated with episodic memory (and insightful drawing of large-scale conclusions therefrom)

This being a blog post, though, rather than just presenting my conclusion, I'll start out by recounting some of the winding and mostly irrelevant path that led me there ;-)

Classical Conceptions of Wisdom

I started out with the dictionary, and as usual found it close to useless....

A typical dictionary definition of "wisdom," which is not a heck of a lot of help, is from Wiktionary, which tells us that

wisdom (plural wisdoms)

means

  1. An element of personal character that enables one to distinguish the wise from the unwise.
  2. A piece of wise advice.
  3. The discretionary use of knowledge for the greatest good.
  4. The ability to apply relevant knowledge in an insightful way, especially to different situations from that in which the knowledge was gained.
  5. The ability to make a decision based on the combination of knowledge, experience, and intuitive understanding.
  6. (theology) The ability to know and apply spiritual truths.
and furthermore that

wise

means

Showing good judgement or the benefit of experience.

Hoo haw.

These definitions don't give us any particularly interesting way of distinguishing "wisdom" from "intelligence." Essentially they define wisdom as either intelligence, spiritual insight, or the application of intelligence for ethical ends. Nothing new here.

Wikipedia is slightly more useful (but only slightly). Firstly it notes that

A standard philosophical (philos-sophia: literally "lover of wisdom") definition says that wisdom consists of making the best use of available knowledge.

It then notes some psychological research demonstrating that in popular culture, wisdom is considered as different from intelligence. Psychological researchers are quoted as saying that though "there is an overlap of the implicit theory of wisdom with intelligence, perceptiveness, spirituality and shrewdness, it is evident that wisdom is a distinct term and not a composite of other terms."

More interestingly, Wikipedia notes, Erik Erikson and other psychologists have argued that it is, in large part, the imminence of death that gives older human beings wisdom.

The knowledge of imminent death is seen as focusing the mind on concerns beyond its own individual well-being and survival, thus inducing a broader scope of understanding and an identification with the world at large, which are associated with the concept of wisdom.

This is interesting from a transhumanist perspective in that it suggests that the death of death would be the death of wisdom! I have seen some evidence for that in the incredible, shallow-minded selfishness of a certain subset of the transhumanist community -- people who are dead-set on having their own selves live forever, without any real thought as to why this might be valuable or what this might mean in a larger perspective. But of course, I don't really think death is the only or ultimate source of wisdom, though in a human context I can believe it's one of the main forces nudging us toward wisdom.

Paul Graham on Wisdom

One of the more interesting theories of wisdom I've run across (I found it a while ago for some random reason I've forgotten, and dug it up again last week) came from a contemporary blogger, Paul Graham:

http://paulgraham.com/wisdom.html

who distinguishes wisdom from intelligence in the following way:


"Wise" and "smart" are both ways of saying someone knows what to do. The difference is that "wise" means one has a high average outcome across all situations, and "smart" means one does spectacularly well in a few.

This explanation also suggests why wisdom is such an elusive concept: there's no such thing. "Wise" means something—that one is on average good at making the right choice. But giving the name "wisdom" to the supposed quality that enables one to do that doesn't mean such a thing exists. To the extent "wisdom" means anything, it refers to a grab-bag of qualities as various as self-discipline, experience, and empathy

Graham considers wisdom as partly a kind of de-biasing and cleansing of the mind, a notion that has some resonance with the modern notion of "Bayesian calibration" of the mind:

Recipes for wisdom, particularly ancient ones, tend to have a remedial character. To achieve wisdom one must cut away all the debris that fills one's head on emergence from childhood, leaving only the important stuff. Both self-control and experience have this effect: to eliminate the random biases that come from your own nature and from the circumstances of your upbringing respectively. That's not all wisdom is, but it's a large part of it. Much of what's in the sage's head is also in the head of every twelve year old. The difference is that in the head of the twelve year old it's mixed together with a lot of random junk.

Provocatively, Graham also posits that intelligence is quite different from wisdom, in that it has to do with accentuating rather than avoiding biases:

The path to intelligence seems to be through working on hard problems. You develop intelligence as you might develop muscles, through exercise. But there can't be too much compulsion here. No amount of discipline can replace genuine curiosity. So cultivating intelligence seems to be a matter of identifying some bias in one's character -- some tendency to be interested in certain types of things -- and nurturing it. Instead of obliterating your idiosyncrasies in an effort to make yourself a neutral vessel for the truth, you select one and try to grow it from a seedling into a tree.

To avoid confusion, from here on I'll sometimes refer to Graham's interpretation of these concepts as Graham-style wisdom and Graham-style intelligence, respectively.

There is an unclarity in Graham's essay as to the extent to which he thinks the kind of focusing and bias-accentuation that's part of Graham-style intelligence has to involve irrationality. My own view is that Graham-style intelligence definitely does NOT require an individual to be irrational, in the sense of making suboptimal judgments about a particular problem given the resources devoted to thinking about the problem. However, a finite system in a complex environment is always going to be irrational to some degree, due to not having enough resources to make a full analysis of any complex situation. To the extent that Graham-style intelligence involves heavy focus on some particular set of topic areas, it's going to drain resources from other areas, thus making the mind less intelligent regarding these other areas.

So, in Graham's view, intelligence has to do with focusing loads of resources on processing in a handful of narrow domains that match one's innate biases, whereas wisdom has to do with evenly distributing processing across all the different domains in one's environment.

Along these lines Graham also notes (correctly, I think) that:

The wise are all much alike in their wisdom, but very smart people tend to be smart in distinctive ways.

As Graham conceives it, wisdom is basically equivalent to general intelligence: it's intelligence averaged across a variety of situations. In mathematics there exist various sorts of averages, some of which weight extreme values more heavily than others (these are p'th power averages). Graham's view would be that "wisdom" and "intelligence" are both estimates of general intelligence (defined as intelligence averaged over different domains/tasks), but with different sorts of averaging: in the case of intelligence, an averaging that pays especial attention to extremes (say a p-power average with p=5, or whatever); and in the case of wisdom, a more typical arithmetic averaging.
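A toy calculation (made-up numbers, just to illustrate this reading of Graham) shows how the two kinds of averaging pull apart:

```python
import numpy as np

def power_mean(xs, p):
    """p-th power mean; p=1 is the ordinary arithmetic mean, large p emphasizes the peaks."""
    xs = np.asarray(xs, dtype=float)
    return (np.mean(xs ** p)) ** (1.0 / p)

# Hypothetical "performance across domains" profiles, purely for illustration
spiky_specialist = [0.95, 0.2, 0.2, 0.2, 0.2]    # brilliant in one domain, weak elsewhere
even_generalist  = [0.55, 0.55, 0.55, 0.55, 0.55]

for name, profile in [("specialist", spiky_specialist), ("generalist", even_generalist)]:
    print(name,
          " arithmetic mean (p=1):", round(power_mean(profile, 1), 3),
          " p=5 mean:", round(power_mean(profile, 5), 3))

# The generalist wins on the arithmetic mean ("wisdom" in this reading),
# the specialist wins on the p=5 mean ("smartness").
```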

This is all sort of nice, but (as will become clear as the essay unfolds) I don't really think it gets at the crux of the matter.


Wisdom Goes Beyond the Individual

Another interesting perspective (that I also think doesn't get at the crux of the matter) is given in the paper "Meaning generation and artificial wisdom" with abstract

We propose an interpretation of wisdom in terms of meaning generation in social groups. Sapient agents are able to generate useful meanings for other agents beyond their own capability of generation of self-meanings. This makes sapient agents specially valuable entities in agent societies because they provide interagent reliable third-person meaning generation that provides some functional redundancy that contributes to enhance individual and social robustness and global performance.

Here wisdom is identified with the ability to generate meaning in the social group, going beyond meaning that is perceptible by the individual doing the meaning-generating. This harks back to Erikson's understanding of wisdom as related to identification with the world at large, beyond the mind/body.

This view also reminds me vaguely of Aldous Huxley's Perennial Philosophy, an attempt to distill the "wisdom teachings" of all the world's religions. In the Perennial Philosophy, wisdom teaches that the individual self is an illusion and all of us are one with the universe (and yet in a sense still distinct and individual).

Mulling over all this, I found that none of it really satisfied me. Of course, a folk concept like "wisdom" can't be expected to have a crisp and sensible formalistic definition ... but it still seemed to me that all the attempts at systematization and formalization I'd read about were missing some really essential aspects of the folk concept.

Wisdom, Cleverness and Skillfulness

And so, I came up with a totally different idea....

After a fair bit of musing, my mind kept drifting to the familiar distinction between declarative, procedural and episodic memory (drawn from textbook cognitive psych).

Remember:

  • Declarative knowledge = knowledge of facts, conjectures, hypotheses (abstract or concrete)
  • Procedural knowledge = knowledge of how to do things (could be physical, mental, social, etc.)
  • Episodic knowledge = knowledge of stories that have occurred in the history of intelligent beings (oneself, others one knows, others one has heard about,...)

One interesting thought that popped into my head is: The concept of wisdom, in its folk-psychology sense, has a lot to do with the ability to solve problems that are heavily dependent on context, using intuition that's based on large-scale analysis of one's episodic-memory store.

Or, less geekily: Wisdom consists of making intelligent use of experience.

A subtlety here is that this need not be one's own experience. Direct experience may be the best way to acquire wisdom (and surely this is part of the reason that wisdom is commonly associated with age) but some rare folks are remarkably gifted at absorbing wisdom from the experience of others -- via observation, reading, conversation, or whatever.

More broadly, this train of thought leads me to a sort of fundamental trinity of aspects of intelligence: cleverness, skillfulness and wisdom.

There's cleverness, which is the ability to appropriately manipulate, create and absorb declarative knowledge toward one's goals. This declarative knowledge may be abstract, or it may be concrete facts. Declarative knowledge is largely symbolic in nature, and cleverness is largely founded on adeptness at symbol-manipulation.

There's skillfulness, which is the ability to effectively do stuff in service of one's goals. This covers physical skills but also highly abstract mental skills like writing an essay, proving a theorem, or closing a business deal.

In some domains skillfulness can exist in the total absence of cleverness. The vast majority of shred metal guitarists would seem to fit in this category (to choose a somewhat random example based on what's playing in my headphones at the moment). These guys are so damn skilled, yet there's not much adept manipulation of meaning in their solos, or compositions. Compare the typical shred guitarist to Yngwie Malmsteen or Buckethead, who are also massively skilled (and in similar ways) -- but who are also highly clever in their symbolic manipulation of the abstract patterns characterizing the concrete sonic forms they're so skilled at producing.

In other domains, it's really hard for cleverness and skillfulness to emerge in any way except exquisitely intercombined. Mathematics is an example. Procedural knowledge at doing proofs is needed for fully understanding complex proofs -- because so many steps are left out in proofs as typically written down, if you don't know how to do proofs, you won't be able to fill in all the gaps in your head when you read a proof, so you'll never get more than a general understanding. On the other hand, it's even more obvious that deep declarative understanding and manipulation-ability regarding mathematical content is necessary to do mathematical proofs. Math is a domain where procedural and declarative intelligence have got to work in extremely tight synergy.

Finally, there's wisdom, which as I'm conceiving it here is the ability to intelligently draw conclusions from a vast repository of data regarding specific situations.

Human minds tend to organize data regarding specific situations using story-like, "narrative" structure, so that in human practice, wisdom often takes the form of the ability to mine appropriate abstract patterns from a vast pool of remembered stories.

Of course, the operation of human episodic memory is largely constructive -- we don't actually grab experiential data out of some sort of neurological database; rather, we synthesize stories from fragmentary images, stories, and such. Wisdom is about synthesizing appropriate stories from large databases of partially-remembered, ambiguous, fractional stories -- and then, as appropriate, using these stories to guide the creation of declarative or procedural knowledge.

In mathematics, wisdom is closely related to what's called "mathematical maturity" ... the general sense of how mathematics is done. Mathematical maturity guides the mind to interesting problems and interesting concepts ... and helps you choose an overall proof strategy (whereas it's cleverness and skillfulness that help you carry out the proof).

The transition from {cleverness + skillfulness} to wisdom in music is epitomized to me by the mid-to-late John Coltrane ... the Coltrane of "My Favorite Things" and "A Love Supreme." These are the solos of a man who has listened so much and played so much that he's disassembled thousands of different musical narratives and reassembled them to tell different kinds of stories, like no one ever told before. So much richer than the merely clever, skillful and emotionally moving solos of the early Coltrane. Certain works of great art manage to be intensely personal and dramatically universal at the same time, and this often results from wisdom in the sense I'm defining it here.

Note that a mature mathematician or a world-changing jazz soloist need not be "wise" in the sense of a Taoist sage. The classical conception of wisdom has to do with making intelligent judgments based on large stores of experience in everyday human life. In the old days this was pretty much the only experience there was -- everyday human life plus various shamanic and psychedelic experiences.... But now the human world has become far more specialized, and it's possible to have a specialized wisdom, because it's possible to have a huge and rich store of episodic knowledge that's restricted to some special domain, like music or mathematics, or even a sufficiently complex game like Go or chess.

This vision of wisdom would seem to contradict Graham's, cited above -- he views wisdom as related to the ability to achieve goals over a broad variety of domains, in contrast to intelligence, which he conceives as more narrowly domain-specialized.

But I don't think the contradiction is total.

I think that within a sufficiently rich and complex domain, one requires wisdom as I've defined it in order to achieve a really high level of intelligence. Learning skills and manipulating symbols is not enough. Direct and intelligent mining of massive experience-stores is needed.

I also think that wisdom, even if achieved initially and primarily within a certain domain, has a striking power to transcend domains. There are a lot of universal patterns among large stores of stories, no matter what the domain.

But even if the wisdom achieved by a great mathematician or chess player or jazz soloist helps that person to intuitively understand the way things work in other domains, this won't necessarily lead them to practical greatness in these other domains -- great achievement seems to require a synthesis of wisdom with either cleverness or skillfulness, and in some domains (like math or jazz improvisation) all three.

Defined-Problem versus Contextual Intelligence

Next, what does all this have to do with artificial intelligence?

One of the lessons learned in the last few decades of AI practice is that there is a pretty big difference between:

  1. Defined-problem intelligence: Problem-solving that occurs "after a crisply-defined problem statement has been identified", versus
  2. Contextual intelligence: problem-solving that is mainly concerned with interpreting general goals in the context of a complex situation, and, "figuring out what the context-specific problem is, in the first place" -- i.e. figuring out what crisply-defined problem, if solved in the relevant context, is likely to work toward the general goals at hand

I think this might be a more useful and more precise distinction than the "narrow AI" versus "general AI" distinction that I've often made before. It's ultimately getting at the same thing, but it's putting the point in a better way, I think.

What's narrow about "narrow AI" systems like chess-playing programs and medical diagnostic expert systems isn't merely that they're focused on specific, narrow domains. It's the fact that they operate based on defined-problem intelligence. It happens, though, that in some sufficiently specialized domains, defined-problem intelligence is enough to yield ass-kicking performance. In other domains it's not -- because in these other domains, figuring out what the problem is, is basically the problem.

I suggest that defined-problem intelligence is focused on declarative and procedural knowledge: i.e. it consists of cleverness or skillfulness or some combination thereof.

Logical reasoning systems, for example, are focused on declarative knowledge, and possess in some cases great facility at manipulating declarative knowledge.

Evolutionary learning systems and neural nets, on the other hand, are mainly focused on procedural knowledge -- on learning how to do stuff, without need for symbolic representations or symbol manipulations.

On the other hand: Contextual intelligence, I suggest, is a matter of knowing how to synthesize declarative and procedural knowledge -- knowledge representing problem-statements and problem-solutions -- out of the combination of general goals and real-world situations.

I suggest that powerful contextual intelligence always relies upon powerful use of episodic memory, and associated mechanisms for storing, accessing, manipulating and analyzing sets of stories.

Or, briefly getting less geeky again: contextual intelligence requires wisdom.

Not at the level of the Taoist sage, John Coltrane or Riemann ... but at a way higher level than possessed by any currently operational AI system.

Note that defined-problem intelligence may sometimes draw on a wide body of background knowledge -- but it uses this background knowledge in a manner constrained by certain well-defined declarative propositions, or practical constraints on procedure-learning. It uses the background knowledge in a manner that doesn't require the background knowledge to be organized or accessed episodically -- rather, it uses background knowledge as a set of declarative facts, or data items, or constraints on actions, or procedures for doing specific things in specific types of situations.

"How to make a lot of money in Russia" is a problem that requires intense contextual as well as defined-problem intelligence. Whereas, "how to make a lot of money by trading oil futures on the Russian stock exchange" is more heavily weighted toward calculational intelligence, though it could be approached in a contextual-intelligence-heavy manner as well.

For instance, in the domain of bioinformatics, figuring out a rule that can diagnose a disease based on a gene expression microarray dataset, is a well-defined problem -- a problem that can be solved via focusing strictly on a small set of reasonably well-encapsulated information items. Declarative and/or procedural focused AI works well here ... much better than human intelligence.
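
To make the "well-defined problem" flavor concrete, here's a little Python sketch of the kind of thing I mean -- synthetic stand-in data and a toy single-gene threshold rule, nothing remotely like the actual methods we use at Biomind, just an illustration of how crisply the problem can be encapsulated once it's been defined:

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy stand-in for a gene expression microarray dataset:
  # 60 patients x 200 genes; gene 17 is (artificially) disease-linked.
  X = rng.normal(size=(60, 200))
  y = (X[:, 17] + 0.5 * rng.normal(size=60) > 0).astype(int)

  def learn_threshold_rule(X, y):
      """Find the single gene g and threshold t such that the rule
      'diagnose disease if expression[g] > t' best fits the training data."""
      best = (None, None, -1.0)
      for g in range(X.shape[1]):
          for t in np.unique(X[:, g]):
              acc = np.mean((X[:, g] > t).astype(int) == y)
              if acc > best[2]:
                  best = (g, t, acc)
      return best

  gene, threshold, train_acc = learn_threshold_rule(X, y)
  print(f"rule: diagnose if gene {gene} > {threshold:.2f}  (training accuracy {train_acc:.2f})")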

On the other hand, figuring out which datasets are likely to be reliable, and figuring out how to normalize these datasets in a reasonable way based on the experimental apparatus described in the associated research paper, are tasks that require much more understanding of context, more milking of subtle patterns in episodic memory. I.e., I'm suggesting, more wisdom.

In the current practice of bioinformatic data analysis, human wisdom is needed to craft well-defined problems to feed into the superior (in this domain) declarative and procedural intelligence of narrow-AI bioinformatic data-analysis systems like the ones we've created at Biomind LLC.

Doing Time in the Universal Mind

Getting back to some of the ideas introduced at the start of this essay ... it seems all this ties in moderately closely with Erikson's definition and the Perennial Philosophy definition of "wisdom."

These definitions conceive wisdom as related to an understanding of life situations in a broader context than that of the individual body and mind. Wisdom as these thinkers conceive it, is a higher level of contextual intelligence than average humans display -- an ability to conceive daily situations in a broader-than-usual context.

This corresponds, really, to relying on a kind of collective episodic memory store, rather than just the episodic memory store corresponding to one's own life. By the time one is old, one is reviewing a longer life, and reviewing the past and future lives of one's children and grandchildren, and thinking about the whole scope of stories all these people may be involved in. A much richer contextuality.

Another ingredient of the Perennial Philosophy notion of wisdom is self-understanding, and I think that ties in here very closely too. One's own self is always part of the context, and to carry out really deep contextual understanding or problem-solving, one needs to appreciate how one's own history, knowledge and biases are affecting the situation and affecting one's own judgments. Powerful contextual intelligence -- unlike powerful calculational intelligence -- requires deep and broad self-understanding.

Wrapping Up

Sooo ... if we conceive wisdom as contextual intelligence powered by rich analysis of episodic memory, then it is clear that wisdom is a key aspect of general intelligence -- and is precisely the aspect that the AI research field has most abjectly ignored to date.

And it is also clear that ethical judgment is richly bound up with wisdom, as here conceived. Ethical judgment, in real life, is all about contextual understanding. It's not about following logical principles of ethics -- even when such principles are agreed-upon, real-life application always comes down to tricky context-specific intuitive judgments. Which comes down to understanding a vast pool of different situations, different episodes, that have existed in the lives of different human beings and groups.

Defined-problem intelligence can be useful for ethical judgments. For instance in cases where scarce resources need to be divided fairly among a large number of parties with complex interrelationships and constraints, one has a well-defined problem of figuring out the optimally ethical balance, or a reasonable approximation thereof. But this actually seems an exceptional case, and the default case of ethical judgment seems to be to rely much more heavily on contextual than defined-problem intelligence.
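
For concreteness, here's a tiny sketch of one standard formalization of that kind of well-defined fairness problem -- max-min fair division of a divisible resource (classic "water-filling"). It's purely illustrative, and "fair" is being cashed out in just one of many possible ways:

  def max_min_fair(capacity, demands):
      """Divide a scarce divisible resource so the worst-off party's
      allocation is as large as possible (water-filling)."""
      allocation = [0.0] * len(demands)
      active = [i for i, d in enumerate(demands) if d > 0]
      remaining = float(capacity)
      while active and remaining > 1e-9:
          share = remaining / len(active)   # equal share for everyone still unsatisfied
          still_active = []
          for i in active:
              give = min(share, demands[i] - allocation[i])
              allocation[i] += give
              remaining -= give
              if allocation[i] < demands[i] - 1e-9:
                  still_active.append(i)
          active = still_active
      return allocation

  # Three parties with unequal needs, and not enough to satisfy everyone:
  print(max_min_fair(10, [2, 5, 9]))   # -> [2.0, 4.0, 4.0]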

Just to be clear: I'm not claiming that the conception of "wisdom" I've outlined here thoroughly captures all aspects of the natural-language/folk-psychology term "wisdom." Like "mind", "intelligence" and so forth, "wisdom" is a fuzzy term that amalgamates various different overlapping meanings ... it's not the kind of thing that CAN be crisply defined and analyzed once and for all.

What I hope to have done is to extract from the folk concept of wisdom some more precise, interesting and productive ideas, which closely relate to this folk concept but don't pretend to exhaust it.

In short...

  • General intelligence = defined-problem intelligence + contextual (problem-defining) intelligence
  • Calculational intelligence = cleverness (declarative intelligence) + skillfulness (procedural intelligence)
  • Contextual intelligence = in the human context, highly reliant on large-scale analysis of episodic memory
  • Wisdom = interestingly interpreted as contextual intelligence
  • Ethics = heavily reliant on wisdom

In this view, not surprisingly, the pursuit of Artificial Wisdom emerges as a subtask of the pursuit of Artificial General Intelligence. But what's interesting is it emerges as a complementary subtask to the one that most of the AI community is working on at the moment -- narrow-AI, or artificial defined-problem intelligence.

There is a bit of work in the AI community on narrative and story understanding. But most of this work seems, well, overly artificial. It has to do with formalistic systems for representing story structure. That is just not how we do things, in our human minds, and I suspect it's not an effective path at all.

I don't at the moment know any better way to give an AGI system a rich understanding of episodes in the world than to actually embed it in the world and let it learn via experience. Virtual worlds may be a great start, given the amount of rich social interaction now occurring therein.

Thus I conclude that an excessive focus on narrow-AI research is, well, un-wise ;-)

And physically or virtually embodied AGI may potentially be a wise approach...

And I return again to the apparent wisdom of integrative AI approaches. Cleverness, skillfulness and wisdom are, I suggest, separate aspects of intelligence, which are naturally implemented in an AI system as separate modules -- but modules which must be architected for close inter-operation, because the real crux of general intelligence is the synergetic fusion of the three.

Friday, March 28, 2008

Buckets of Crumbs!!!

I just posted a way deeper and more interesting blog post a couple hours ago (using multiverse theory and Occam's Razor to explain why voting may often be rational after all), but I decided to post this sillier one tonight too because I have a feeling I'll forget if I put it off till tomorrow (late at night I'm willing to devote a little time to blogging in lieu of much-needed sleep ... tomorrow when I wake up there will be loads of work I'll feel obliged to do instead!)

This blog post just re-"prints" part of a post I made to the AGI email list today, which a couple people already asked me if they could quote.

It was made in response to a poster on the AGI list who made the argument that AGI researchers would be more motivated to work on building superhuman AGI if there were more financial gain involved ... and that, in fact, desire for financial gain MUST be a significant part of their motivation ... since AGI researchers are only human too ...

What I said is really simple and shouldn't need to have been said, but still, this sort of thing seems to require constant repetition, due to the nature of the society we live in...

Here goes:

Singularitarian AGI researchers, even if operating largely or partly in the business domain (like myself), value the creation of AGI far more than the obtaining of material profits.

I am very interested in deriving $$ from incremental steps on the path to powerful AGI, because I think this is one of the better methods available for funding AGI R&D work.

But deriving $$ from human-level AGI really is not a big motivator of mine. To me, once human-level AGI is obtained, we have something of dramatically greater interest than the accumulation of any amount of wealth.

Yes, I assume that if I succeed in creating a human-level AGI, then huge amounts of $$ for research will come my way, along with enough personal $$ to liberate me from needing to manage software development contracts or mop my own floor. That will be very nice. But that's just not the point.

I'm envisioning a population of cockroaches constantly fighting over crumbs of food on the floor. Then a few of the cockroaches -- let's call them the Cockroach Robot Club -- decide to spend their lives focused on creating a superhuman robot which will incidentally allow cockroaches to upload into superhuman form with superhuman intelligence. And the other cockroaches insist that the Cockroach Robot Club's motivation in doing this must be a desire to get more crumbs of food. After all, just **IMAGINE** how many crumbs of food you'll be able to get with that superhuman robot on your side!!! Buckets full of crumbs!!!

(Perhaps after they're resurrected and uploaded, the cockroaches that used to live in my kitchen will come to appreciate the literary inspiration they've provided me! For the near future though I'll need to draw my inspiration elsewhere, as Womack Exterminators seems to have successfully vanquished the beasties with large amounts of poisonous gas. Which I can't help feeling guilty about, being a huge fan of the film Twilight of the Cockroaches ... but really, I digress...)

I'm also reminded of a meeting I was in back in 1986, when I was getting trained as a telephone salesman (one of my lamer summer jobs from my grad school days ... actually I think that summer I had given up on grad school and moved to Las Vegas with the idea of becoming a freelance philosopher ... but after a couple months of phone sales, which was necessary because freelance philosophers don't make much money, I reconsidered and went back to grad school in the fall). The trainer, a big fat scary guy who looked and sounded like a meaner version of my ninth grade social studies teacher, was giving us trainee salespeople a big speech about how everyone wanted success, and he asked us how success was defined. Someone in the class answered MONEY and the trainer congratulated him and said: "That's right, in America success means money, and you're going to learn to make a lot of it!" The class cheered (a scene that could have been straight out of Idiocracy ... "I like money!"). Feeling obnoxious (as I usually was in those days), I raised my hand and asked the trainer if Einstein was successful or not ... since Einstein hadn't been particularly rich, I noted, that seemed to me like a counterexample to the principle that had been posited regarding the equivalence of success and financial wealth in the American context. The trainer changed the subject to how the salesman is like a hammer and the customer is like a nail. (By the way I was a mediocre but not horrible phone salesman of "pens, caps and mugs with your company name on them." I had to use the name "Ben Brown" on the phone though because no one could pronounce "Goertzel." If you were a small business owner in summer 1986 and got a phone call from an annoying crap salesman named Ben Brown, it was probably the 19 year old version of me....)


Thursday, March 27, 2008

Why Voting May Not be Such a Stupid Idea (A Multiversal Argument)

I haven't voted in any election for a heck of a long time ... but, in some conversations a couple years ago, an argument came up that actually seems like a reasonable argument why voting might be a good idea.

I'm not sure why I never blogged this before ... but I didn't ... so here goes ...


Why might voting be worthwhile, even though the chances that your vote breaks a tie in the election are vanishingly small?

Consider this: Would you rather live in a branch of the multiverse where the people like you vote, or where the people like you don't vote?

Obviously, if there are a lot of people like you, then you'll be better off in a branch where the people like you vote.

So: You should vote so as to be sure you're in one of those branches.

But, wait a minute. How do you know you won't end up in a branch where most of the people like you DON'T vote, but you vote anyway?

Well, you can't know that for sure. But, the question to ask is, which of the two swaths of possible universes are more probable overall:

Type 1) Ones in which everyone like you votes

Type 2) Ones in which most people like you don't vote, but you're the exception

Adopting an "Occam prior" that favors simpler possible universes over more complex ones, you arrive at the conclusion that Type 1 universes are more probable.

Now, this isn't an ironclad, universal argument for voting. If you're such a freak that all the people like you voting wouldn't make any difference, then this argument shouldn't convince you to vote.

Another counterargument against the above argument is that free will doesn't exist in the multiversal framework. What the heck does it mean to "decide" which branch of the multiverse to go down? That's not the kind of thing you can decide. Your decision process is just some dynamics that occurs on some branches and not others. It's not like your decision process steps out of the branching-process governing the multiverse and chooses which routes you follow....

But the thing is, deciding still feels like deciding from within your own human mind -- whether or not it's REALLY deciding in any fundamental physical sense.

So, I'm not telling you to decide anything. I'm merely (because it's what my internal dynamics are doing, in this branch of the multiverse that we're in) typing in some words that my internal dynamics believe may encourage you to carry out some of your own internal dynamics that may feel to you like you're deciding something. Right? Because, this is simply the way the universe is happening ... in this branch of the multiverse....

Don't decide anything. Just notice that these words are making you reflect on which branch of the multiverse you'd rather be in -- the one where everyone like you votes, or the one where they don't....

And of course it's not just about voting. It's really about any ethical behavior ... any thing such that we'd all be better off if everyone like us did that thing.

It's about compassion, for that matter -- we'd all be better off if everyone was more compassionate.... Would you rather be in the branch of the multiverse where everyone like you is compassionate, or....

Well, you get it.

But am I voting in this year's Presidential elections?

Out of all the candidates available, I'd definitely support Obama ... but nah, I think I'll probably continue my long tradition of lame citizenship and not vote.

I just don't think there are that many people like me out there ;-)

But if I read enough other blog posts like this one, I'd decide there was a large enough population of similar people out there, and I WOULD vote....

Tuesday, March 25, 2008

Quantum Voodoo in "Classical" Systems?

Way way back in the dark ages, when I was 19 years old and in my second year of grad school, I wrote a paper called "Holistic Indeterminacy" and submitted it to the journal Mind.

The basic idea was that, in some cases, very complex "classical" physical systems might literally display the same kind of indeterminacy associated with quantum systems.

The paper was printed out crappily on a dot matrix printer with dimly printed ink, and written in a not terribly professional way. It got rejected, and I've long since lost the thing. Furthermore, I never since found time to write up the ideas in the paper again. (Had there been a Web back then I would have posted the thing on my website, but this was the mid 1980's ... if I recall correctly, I hadn't even sent an email yet, at that point. I might actually have the paper on some old floppy disk in the basement, but odds are the data's long corrupted even if the disk is still around...).

But anyways ... please pardon these reminiscences of an old man!! ... these old ideas of mine came up today in a conversation I was having with a friend over lunch, so I figured I'd take a few minutes to type them into a blog post (way less work than a paper!).

In fact these ideas are far more topical now than in the 1980's, as quantum computing is these days finally becoming a reality ... along with macroscopic quantum systems and all sorts of other fun stuff....

Partly because of these advances, and partly because the ideas have had decades to pervade my brain, I think I can now express the idea a bit more crisply than I did back then.

Still, it's a freaky and speculative train of thought, which I am not fully convinced makes any sense.

But at very least, it's some amusing hi-fi sci-fi.....

The basic idea is as follows.

Premise: Quantum logic is the logic of that which, in principle, cannot be observed. Classical logic is the logic of that which can, in principle, be observed.

The above may sound odd but it's not my idea -- it's the conclusion of a lot of work in quantum physics and the quantum theory of measurement, by serious physicists who understand such things far better than I do. It's way clearer now than it was in the mid 80's, though it was known to all the cool people even then....

Now is where things start to get weird. I want to make the above premise observer-dependent in a manner different from how quantum theory does it. Namely, I want to introduce an observer who, himself, has a finite capacity for understanding and observation -- a finite Kolmogorov complexity, for example.

This leads to my

Modest proposal: An observing system should use quantum logic to reason about anything that it, as a particular system, cannot in principle observe.

There are some things that a worm cannot observe, because it is just a worm, but that I can observe. From the perspective of the worm, I suggest, these things should be reasoned about using quantum logic.

Similarly, there are some things that I cannot observe, in principle, because I am just a little old me.

Yes, I could potentially expand myself into a dramatically greater being. But, then that wouldn't help ME (i.e., my current self) to observe these things ... it would just help {some other, greater guy who had evolved out of me} to observe these things.

Of course, you can't step into the same river once ... and there is not really any ME that is persistent beyond an individual moment (and there are no individual moments!). But you can talk about a class of systems, and you can say that some observables are simply NOT observable by any system within that class. So systems within that class need to reason about these observables using quantum logic.

Where does complexity come into the picture? Well, among the things I can't in principle observe, are patterns of more complexity than can fit in my brain.

And among the things my deliberatively conscious mind can't in principle observe, are patterns of more complexity than can fit within its own very limited capacity.

So, if we interpret "quantum logic is the logic of things that can't in principle be observed" subjectively, as applying to particular real-world observing systems (including subsystems like the deliberatively conscious component of a human brain), then we arrive at the funky conclusion that maybe we should reason about each others' minds using quantum logic ... or maybe even, that we should reason about our own unconscious using quantum logic....

Funny idea, hmmm?

Way back when I wrote down some mathematics embodying these notions, but I don't feel like regenerating that right now. Although I'm a bit curious to see whether it had any validity or not ;-)

What made me think of this today was a discussion about consciousness, and the possibility (raised by the friend I was talking to) that some sort of wacky quantum voodoo is necessary to produce consciousness.

Maybe so. On the other hand, it could also be that any system complex enough to display the kind of rich deliberative consciousness we humans do, is complex enough that humans need to reason about it using quantum logic ... because in principle we cannot observe its dynamics (without becoming way more complex than we are, hence losing our self-ness...).

Ahhh... well I'll get back to doing the final edits on the Probabilistic Logic Networks book now ...

Monday, March 10, 2008

A New, Improved, Completely Whacky Theory of Evolution

This blog post presents some really weird, speculative science, which I take with multiple proverbial salt-grains ... but, well, wouldn't it be funky if it were true?

The idea came to mind in the context of a conversation with my old friend Allan Combs, with whom I co-edit the online journal Dynamical Psychology.

It basically concerns the potential synergy between two apparently radically different lines of thinking:


Morphic Fields

The basic idea of a morphic field is that, in this universe, patterns tend to continue -- even when there's no obvious causal mechanism for it. So that, for instance, if you teach thousands of rats worldwide a certain trick, then afterwards it will be easier for additional rats to learn that trick, even though the additional rats have not communicated with the prior ones.

Sheldrake and others have gathered a bunch of evidence in favor of this claim. Some say that it's fraudulent or somehow subtly methodologically flawed. It might be. But after my recent foray into studying Ed May's work on precognition, and other references from Damien Broderick's heartily-recommended book Outside the Gates of Science (see my previous blog posts on psi), I'm becoming even more willing than usual to listen to data even when it goes against prevailing ideas.

Regarding morphic fields on the whole, as with psi, I'm still undecided, but interested. The morphic field idea certainly fits naturally with my philosophy that "the domain of pattern is primary, not the domain of spacetime"

Estimation of Distribution Algorithms

EDA's, on the other hand, are a nifty computer science idea aimed at accelerating artificial evolution (that occurs within software processes)

Evolutionary algorithms are a technique in computer science in which, if you want to find/create a certain object satisfying a certain criterion, you interpret the criterion as a "fitness function" and then simulate an "artificial evolution process" to try to evolve objects better and better satisfying the criterion. A population of candidate objects is generated at random, and then, progressively, evolving objects are crossed-over and mutated with each other. The fittest are chosen for further survival, crossover and mutation; the rest are discarded.

Google "genetic algorithms" and "genetic programming" if this is novel to you.

This approach has been used to do a lot of practical stuff -- in my own work, for example, I've evolved classification rules predicting who has cancer or who doesn't based on their genetic data (see Biomind); evolved little programs controlling virtual agents in virtual worlds to carry out particular tasks (see Novamente); etc. (though in both of those cases, we have recently moved beyond standard evolutionary algorithms to use EDA's ... see below...)

EDA's mix evolutionary algorithms with probabilistic modeling. If you want to find/create an object satisfying a certain criterion, you generate a bunch of candidates -- and then, instead of letting them cross over and mutate, you do some probability theory and figure out the patterns distinguishing the fit ones from the unfit ones. Then you generate new babies, new candidates, from this probability distribution -- throw them into the evolving population; lather, rinse, repeat.
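
And here's the same toy problem attacked EDA-style -- a PBIL-like sketch in which the "probability distribution" is just a vector of independent per-bit probabilities. Real EDA's (like the methods in Pelikan's book mentioned below) model dependencies among the variables, which is where much of their power comes from; this is only meant to show the shape of the loop:

  import random

  random.seed(1)

  GENOME_LEN, POP_SIZE, GENERATIONS, LEARN_RATE = 40, 60, 80, 0.1

  def fitness(genome):
      return sum(genome)                 # same toy criterion as before

  # The "probability distribution": independent per-bit probabilities of a 1.
  model = [0.5] * GENOME_LEN

  def sample(model):
      return [1 if random.random() < p else 0 for p in model]

  for _ in range(GENERATIONS):
      population = [sample(model) for _ in range(POP_SIZE)]
      population.sort(key=fitness, reverse=True)
      elite = population[: POP_SIZE // 5]                # the "fit" candidates
      # Figure out the patterns distinguishing the fit candidates (here, just
      # per-bit frequencies) and nudge the distribution toward them.
      for i in range(GENOME_LEN):
          freq = sum(g[i] for g in elite) / len(elite)
          model[i] = (1 - LEARN_RATE) * model[i] + LEARN_RATE * freq

  print("best fitness:", max(fitness(sample(model)) for _ in range(POP_SIZE)))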

It's as if, instead of all this sexual mating bullcrap, the Federal gov't made an index of all our DNA, then did a statistical study of which combinations of genes tended to lead to "fit" individuals, then created new individuals based on this statistical information. Then these new individuals, as they grow up and live, give more statistical data to throw into the probability distribution, etc. (I'd argue that this kind of eugenics is actually a plausible future, if I didn't think that other technological/scientific developments were so likely to render it irrelevant.)

Martin Pelikan's recent book presents the idea quite well, for a technical computer science audience.

Moshe Looks' PhD thesis presents some ideas I co-developed regarding applying EDA's to automated program learning.

There is by now a lot of mathematical/computational evidence that EDA's can solve optimization problems that are "deceptive" (hence very difficult to solve) for pure evolutionary learning. To put it in simple terms, there are many broad classes of fitness functions for which pure neo-Darwinist evolution seems prone to run into dead ends, but for which EDA style evolution can jump out of the dead ends.

Morphic Fields + EDA's = ??

Anyway -- now how do these two ideas fit together?

What occurred to Allan Combs and myself in an email exchange (originating from Allan reading about EDA's in my book The Hidden Pattern) is:

If you assume the morphic field hypothesis is true, then the idea that the morphic field can serve as the "probability distribution" for an EDA (allowing EDA-like accelerated evolution) follows almost immediately...

How might this work?

One argument goes as follows.

Many aspects of evolving systems are underdetermined by their underlying genetics, and arise via self-organization (coupled to the environment and initiated via genetics). A great example is the fetal and early-infancy brain, as analyzed in detail by Edelman (in Neural Darwinism and other writings) and others. Let's take this example as a "paradigm case" for discussion.

If there is a morphic field, then it would store the patterns that occurred most often in brain-moments. The brains that survived longest would get to imprint their long-lasting patterns most heavily on the morphic field. So, the morphic field would contain a pattern P, with a probability proportional to the occurrence of P in recently living brains ... meaning that occurrence of P in the morphogenetic field would correspond roughly to the fitness of organisms containing P.

Then, when young brains were self-organizing, they would be most likely to get imprinted with the morphic-field patterns corresponding to the most-fit recent brains....

So, if one assumes a probabilistically-weighted morphic field (with the weight of a pattern proportional to the number of times it's presented) then one arrives at the conclusion that evolution uses an EDA ...
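
Just to make the structural analogy vivid (and with tongue firmly in cheek -- this is a toy simulation of a speculative hypothesis, not evidence for anything), here's roughly what the "morphic field as EDA distribution" loop might look like, with the field playing the role of the probability model and lifespan playing the role of fitness-weighting:

  import random

  random.seed(2)

  PATTERN_LEN, POP_SIZE, GENERATIONS = 30, 50, 60

  def lifespan(brain):
      # toy assumption: brains carrying more of the "good" pattern bits live longer
      return 1 + sum(brain)

  field = [0.5] * PATTERN_LEN            # the "morphic field": per-pattern imprint strengths

  def develop_brain(field, noise=0.1):
      """A young brain self-organizes, imprinted probabilistically by the field."""
      return [1 if random.random() < (1 - noise) * p + noise * 0.5 else 0 for p in field]

  for _ in range(GENERATIONS):
      brains = [develop_brain(field) for _ in range(POP_SIZE)]
      # Longer-lived brains imprint their patterns more heavily on the field.
      total_life = sum(lifespan(b) for b in brains)
      for i in range(PATTERN_LEN):
          weighted = sum(lifespan(b) * b[i] for b in brains) / total_life
          field[i] = 0.9 * field[i] + 0.1 * weighted

  print("mean imprint strength:", round(sum(field) / PATTERN_LEN, 3))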

Interesting to think that the mathematical power of EDA's might underlie some of the power of biological evolution!

The Role of Symbiosis?

In computer science there are other approaches than EDA's for jumping out of evolutionary-programming dead ends, though -- one is symbiosis and its potential to explore spaces of forms more efficiently than pure evolution. See e.g. Richard Watson's book from a couple years back --

Compositional Evolution: The Impact of Sex, Symbiosis, and Modularity on the Gradualist Framework of Evolution

and, also, Google "symbiogenesis." (Marginally relevantly, I wrote a bit about Schwemmler's ideas on symbiogenesis and cancer, a while back.)

But of course, symbiosis and morphic fields are not contradictory notions.

Hypothetically, morphic fields could play a role in helping organisms to find the right symbiotic combinations...

But How Could It Be True?

How the morphic fields would work in terms of physics is a whole other question. I don't know. No one does.

As I emphasized in my posts on psi earlier this year, it's important not to reject data just because one lacks a good theory to explain it.

I do have some interesting speculations to propound, though (I bet you suspected as much ;-). I'll put these off till another blog post ... but if you want a clue of my direction of thinking, mull a bit on

http://www.physics.gatech.edu/schatz/clocks.html

Sunday, March 09, 2008

Brief Report on AGI-08

Sooo....

The AGI-08 conference (agi-08.org) occurred last weekend in Memphis...!

I had hoped to write up a real scientific summary of AGI-08, but at the moment it doesn't look like I'll find the time, so instead I'll make do with this briefer and more surface-level summary...

Firstly, the conference went VERY well. The tone was upbeat, the discussions were animated and intelligent, and all in all there was a feel of real excitement about having so many AGI people in one place at one time.

Attendance was good: We originally anticipated 80 registrants but had 120+.

The conference room was a futuristic setting called "The Zone" that looked sorta like the Star Trek bridge -- with an excellent if mildly glitchy video system that, during Q&A sessions, showed the questioner up on a big screen in front of the room.

The unconventional format (brief talks followed by long discussion/Q&A sessions) was both productive and popular. The whole thing was video-ed and at some point the video record will be made available online (I don't know the intended timing of this yet).

The proceedings volume was released by IOS Press a few weeks before the conference and is a thick impressive-looking tome.

The interdisciplinary aspect of the conference seemed to work well -- e.g. the session on virtual-worlds AI was chaired by Sibley Verbeck (CEO of Electric Sheep Company) and the session on neural nets was chaired by Randal Koene (a neuroscientist from Boston University). This definitely made the discussions deeper than if it had been an AI-researchers-only crowd.

Plenty of folks from government agencies and large and small corporations were in attendance, as well as of course many AI academics and non-affiliated AGI enthusiasts. Among the AI academics were some highly-respected stalwarts of the AI community, alongside the new generation...

There seemed to be nearly as many Europeans as Americans there, which was a pleasant surprise, and some Asians as well.

The post-conference workshop on ethical, sociocultural and futurological issues drew about 60 people and was a bit of a free-for-all, with many conflicting perspectives presented quite emphatically and vociferously. I think most of that discussion was NOT captured on video (it took place in a different room where video-ing was less convenient), though the workshop talks themselves were.

The media folks in attendance seemed most energized by the session on AI in virtual worlds, presumably because in this session the presenters (me, Andrew Shilliday, and Martin Magnusson) showed movies of cute animated characters doing stuff. This gave the nontechnical observers something to grab onto, which most of the other talks did not.

As at the earlier AGI-06 workshop, one of the most obvious observations after listening to the talks was that a lot of AGI research programs are pursuing fairly similar architectures and ideas but using different languages to describe what they're doing. This suggests that making a systematic effort at finding a common language and really understanding the true overlaps and differences of the various approaches, would be very beneficial. There was some talk of organizing a small, invitation-only workshop among practicing AGI system architects, perhaps in Fall 2008, with a view toward making progress in this direction.

Much enthusiasm was expressed for an AGI-09, and it was decided that this will likely be located in Washington DC, a location that will give us the opportunity to use the conference to help energize various government agencies about AGI.

There was also talk about the possibility of an AGI online technical journal, and a group of folks will be following that up, led by Pei Wang.

An "AGI Roadmap" project was also discussed, which would involve aligning different cognitive architectures currently proposed insofar as possible, but also go beyond that. Another key aspect of the roadmap might be an agreement on certain test environments or tasks that could be used to compare and explore various AGI architectures in more of a common way than is now possible.

Lots of ideas ... lots of enthusiasm ... a strong feeling of community-building ... so, I'm really grateful to Stan Franklin, Pei Wang, Sidney DeMello and Bruce Klein and everyone else who helped to organize the conference.

Finally, an interesting piece of feedback was given by my mother, who knows nothing about AGI research (she runs a social service agency) and who did not attend the conference but read the media coverage afterwards. What she said is that the media seems to be taking a far less skeptical and mocking tone toward AGI these days, as opposed to 7-10 years ago when I first started appearing in the media now and then. I think this is true, and it signifies a real shift in cultural attitude. This shift is what allowed The Singularity Is Near to sell as many copies as it did; and what encouraged so many AI academics to come to a mildly out-of-the-mainstream conference on AGI. Society, including the society of scientists, is starting to wake up to the notion that, given modern technology and science, human-level AGI is no longer a pipe dream but a potential near-term reality. w00t! Of course there is a long way to go in terms of getting this kind of work taken as seriously as it should be, but at least things seem to be going in the right direction.

Balancing concrete work on AGI with community-building work like co-organizing AGI is always a tricky decision for me.... But in this case, the conference went sufficiently well that I think it was worthwhile to deviate some time from the R&D to help out with it. (And now, back to the mass of other work that piled up for me during the conference!)

Yet More Rambling on Will (Beyond the Rules vs. Randomness Dichotomy)

A bit more on this nasty issue of will ... complementing rather than contradicting my previously-expressed ideas.

(A lot of these theory-of-mind blog posts are gonna ultimately get revised and make their way into The Web of Pattern, the sequel to The Hidden Pattern that I've been brewing in my mind for a while...)

What occurred to me recently was a way out of the old argument that "free will can't exist because the only possibilities are RULES versus RANDOMNESS."

In other words, the old argument goes: Either a given behavior is determined, or it's random. And in either case, where's the will? Granted, a random coin-toss (quantum or otherwise) may be considered "free" in a sense, but it's not willed -- it's just random.

What occurred to me is that this dichotomy is oversimplified because it fails to take two factors into account:

  1. A subjectively experienced moment occurs over a fuzzy span of time, not at a single physical moment
  2. "Random" always means "random with respect to some observer."

To clarify the latter point: "S is random to system X" just means "S contains no patterns that system X could identify."

System Y may be able to recognize some patterns in S, even though X can't.

And, X may later evolve into X1, which can recognize patterns in S.

Something that was random to me thirty years ago, or thirty seconds ago, may be patterned to me now.
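
One concrete way to cash this out is to treat an observer as a compressor: S is "random to X" if X can't compress S at all. Here's a little sketch along those lines, with a run-length encoder standing in for the worm-level observer and zlib standing in for a sharper one -- a crude operationalization, obviously, not a definition of subjective randomness:

  import zlib, random

  def rle_compress(data: bytes) -> bytes:
      """A 'worm-level' observer: it only notices runs of identical bytes."""
      out, i = bytearray(), 0
      while i < len(data):
          j = i
          while j < len(data) and data[j] == data[i] and j - i < 255:
              j += 1
          out += bytes([j - i, data[i]])
          i = j
      return bytes(out)

  def random_to(observer, data: bytes) -> bool:
      """'S is random to X' = observer X finds no pattern it can use to compress S."""
      return len(observer(data)) >= len(data)

  patterned = b"abcabcabc" * 50                              # long-range repetition, no runs
  noise = bytes(random.getrandbits(8) for _ in range(450))

  for name, s in [("patterned", patterned), ("noise", noise)]:
      print(name,
            "| random to the worm:", random_to(rle_compress, s),
            "| random to zlib:", random_to(lambda d: zlib.compress(d, 9), s))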

Consider the perspective of the deliberative, rational component of my mind, when it needs to make a choice. It can determine something internally, or it can draw on an outside source, whose outcome may not be predictable to it (that is, it may make a "random" choice). Regarding outside sources, options include

  1. a random or pseudorandom number generator
  2. feedback from the external physical world, or from another mind in the vicinity
  3. feedback from the unconscious (or less conscious) non-deliberative part of the mind

Any one of these may introduce a "random" stimulus that is unpatterned from the point of view of the deliberative decision-maker.

But of course, options 2 and 3 have some different properties from option 1. This is because, in options 2 or 3, something that appears random at a certain moment, may appear non-random a little later, once the deliberative mind has learned a little more (and is thus able to recognize more or different patterns).

Specifically, in the case of option 3, it is possible for the deliberative mind to draw on the unconscious mind for a "random" choice, and then a half-moment later, import more information from the unconscious that allows it to see some of the patterns underlying the previously-random choice. We may call this process "internal patternization."

Similarly, in the case of option 2, it is possible for the deliberative mind to draw on another mind for a "random" choice, and then a half-moment later, import more information from the other mind that allows it to see some of the patterns underlying the previously random choice. We may call this process "social patternization."

There's also "physical patternization" where the random choice comes from an orderly (but initially random to the perceiving mind) process in the external world.

These possibilities are interesting to consider in the light of the non-instantaneity of the subjective moment. Because, the process of patternization may occur within a single experienced moment.

The subjective experience of will, I suggest, is closely tied to the process of internal patternization. When we have the feeling of making a willed decision, we are often making a "random" choice (random from the perspective of our deliberative component), and then immediately having the feeling of seeing some of the logic and motivations under that choice (as information passes from unconscious to conscious). But the information passed into the deliberative mind is of course never complete and there's always still some indeterminacy left, due to the limited capacity of deliberative mind as compared to unconscious mind.

So, what is there besides RULES plus RANDOMNESS?

There is the feeling of RANDOMNESS transforming into RULES (i.e. patterns), within a single subjective moment.

When this feeling involves patterns of the form "Willing X is causing {Willing X plus the occurrence of S}", then we have the "free will" experience. (This is the tie-in with my discourse on free will and hypersets, a few blog posts ago.)

That is, the deliberative content of recursive willing is automatized and made part of the unconscious, through repeated enaction. It then plays a role in unconscious action determination, which is perceived as random by the deliberative mind -- until, toward the tail end of a subjective moment, it becomes more patterned (from the view of the deliberative mind) due to receiving more attention.

Getting practical for a moment: None of this, as I see it, is stuff that you should program into an AGI system. Rather it is stuff that should emerge within the system as a part of its ongoing recognition of patterns in the world and itself, oriented toward achieving its goals. In this particular case the dynamics of attention allocation is key -- the process by which low-attention items (unconscious) can rapidly gain attention (become intensely deliberatively conscious) within a single subjective moment, but can also have a decisive causal impact prior to this increase in attention. The nonlinear dynamics of attention, in other words, is one of the underpinnings of the subjective experience of will.

What I'm trying to do here is connect phenomenology, cognitive science and AGI design. It seems to work, conceptually, in terms of according with my own subjective experience and also with known data on human brain/mind and my intuition/experience with AGI design.

Tuesday, February 19, 2008

Characterizing Consciousness and Will in Terms of Hypersets

This is another uber-meaty blog post, which reports a train of thought I had while eating dinner with my wife last night, which appears to me to provide a new perspective on two of the thorniest issues in the philosophy of mind: consciousness and will.

(No, I wasn't eating any hallucinogenic mushrooms for dinner; just some grilled chicken with garlic and ginger and soy sauce, on garlic naan. Go figure.)

These are of course very old issues and it may seem every possible perspective on them has already been put forth, without anything fundamentally being resolved.

However, it seems to me that the perspectives on these topics explored so far constitute only a small percentage of the perspectives that may sensibly be taken.

What I'm going to do here is to outline a new approach to these issues, which is based on hyperset theory -- and which ties in with various things I've written on these topics before, inspired by neuropsychology and systems theory and probabilistic logic and so on and so forth.

(A brief digressive personal comment: I've been sooooo overwhelmingly busy with Novamente-related business stuff lately, it's really been a pleasure to take a couple hours to write down some thoughts on these more abstract topics! Of course, no matter what I'm doing with my time as I go through my days, my unconscious is constantly churning on conceptual issues like the ones I talk about in this blog post -- but time to write down my thoughts on such things is so damn scant lately.... One of the next things to get popped off the stack is the relation of the model of will given here with ethical decision-making, as related to the iterated prisoner's dilemma, the voting problem, and so forth. Well, maybe next week ... or next month.... I certainly feel like working toward making a thinking machine for real, is more critical than exploring concepts in the theory of mind; but on a minute-by-minute basis, I have to admit I find the latter more fun....)

Hypersets

One of the intuitions underlying the explorations presented here is that possibly it's worth considering hypersets as an important part of our mathematical and conceptual model of mind -- and consciousness and will in particular.

A useful analogy might be the way that differential equations are an important part of our mathematical and conceptual model of physical reality. Differential equations aren't in the world; and hypersets aren't in the mind; but these sorts of mathematical abstractions may be extremely useful for modeling and understanding what's going on.

In brief, hypersets are sets that allow circular membership structures, e.g. you can have

A = {A}

A = {B,{A}}

and so forth. It follows that you can have functions that take themselves as arguments, and lots of other stuff that doesn't work according to the standard axioms of set theory.

While exotic, hypersets are well-defined mathematical structures, and in fact simple hypersets have fewer conceptual conundrums associated with them than the real number system (which is assumed in nearly all our physics theories).
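
If you want a cheap hands-on feel for this kind of circularity, Python will happily let you build self-containing structures (using lists as stand-ins for sets, since Python's actual sets require hashable members and so can't be circular). This is just an analogy to play with, not hyperset theory proper:

  # A = {A}: a container that contains itself.
  A = []
  A.append(A)
  print(A is A[0])          # True: A is its own sole member

  # A = {B, {A}}: circular membership two levels deep.
  B = "B"
  inner = []
  A2 = [B, inner]
  inner.append(A2)
  print(A2[1][0] is A2)     # True: following memberships leads back to A2

  # A function that takes itself as an argument, f = f(f) style:
  def f(g):
      return g
  print(f(f) is f)          # True: f applied to f yields f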

The best treatment of hypersets for non-mathematicians that I know of is the book The Liar, which I highly recommend.

Anyway, getting down to business, let's start with consciousness, and then after that we'll proceed to will.

Disambiguating Consciousness

Of course the natural language term "consciousness" is heavily polysemous, and I'm not going to try to grapple with every one of its meanings. Specifically, I'm going to focus on the meaning that might be specified as "reflective consciousness." Which is different from the "raw awareness" that, arguably, worms and bugs have, along with us bigger creatures.

Raw awareness is also an interesting topic, though I tend toward a kind of panpsychism, meaning that I tend to believe everything (even a rock or an electron) possesses some level of raw awareness. Which means that raw awareness is then just an aspect of being, rather than a separate quality that some entities possess and not others.

Beyond raw awareness, though, it's clear that different entities in the universe manifest different kinds of awareness. Worms are aware in a different way than rocks; and, I argue, dogs, pigs, pigeons and people are aware in a different way from worms. What I'll (try to) deal with here is the sense in which the latter beasts are conscious whereas worms are not -- i.e. what might be called "reflective consciousness." (Not a great term, but I don't know any great terms in this domain.)

Defining Reflective Consciousness

So, getting down to business.... My starting-point is the old cliché that

Consciousness is consciousness of consciousness

This is very nice, but doesn't really serve as a definition or precise characterization.

In hyperset theory, one can write an equation

f = f(f)

with complete mathematical consistency. You feed f, as input, f; and you receive, as output, f.

It seems evident, though, that while this sort of anti-foundational recursion may be closely associated with consciousness, this simple equation itself doesn't tell you much about consciousness. We don't really want to say

Consciousness = Consciousness(Consciousness)

I think it's probably more useful to say:

Consciousness is a hyperset, and consciousness is contained in its membership scope

Here by the "membership scope" of a hyperset S, what I mean is the members of S, plus the members of the members of S, etc.

This is no longer a definition of consciousness, merely a characterization.

What it says is that consciousness must be defined anti-foundationally, as some sort of construct via which consciousness builds consciousness from consciousness -- but it doesn't specify exactly how.

Next, I want to introduce the observation, which I made in The Hidden Pattern (and in an earlier essay) that the subjective experience of being conscious of some entity X, is correlated with the presence of a very intense pattern in one's overall mind-state, corresponding to X. This idea is also the essence of neuroscientist Susan Greenfield's theory of consciousness (but in her theory, "overall mind-state" is replaced with "brain-state").

Putting these pieces together (hypersets, patterns and correlations), we arrive at the following working definition of consciousness:

"S is conscious of X" is defined as: The declarative content that {"S is conscious of X" correlates with "X is a pattern in S"}

In other words: Being conscious of a pig, means having in one's mind declarative knowledge of the form that one's consciousness of that pig is correlated with that pig being a pattern in one's overall mind-state.

Note that this declarative knowledge must be expressed in some language such as hyperset theory, in which anti-foundational inclusions are permitted. But of course, it doesn't have to be a well-formalized language -- just as pigeons, for instance, can carry out deductive reasoning without having a formalization of the rules of Boolean or probabilistic logic in their brains. All that is required is that the conscious mind has an internal informal language capable of expressing and manipulating simple hypersets.

To make this formal, one requires also a definition of pattern, which I've supplied in The Hidden Pattern.

OK, so much for consciousness. Now, on to our other old friend, will.

Defining Will

The same approach, I suggest, can be used to define the notion of "will," by which I mean the sort of willing process that we carry out in our minds when we subjectively feel like we are deciding to make one choice rather than another.

In brief:

"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}

To fully explicate this is slightly more complicated than in the case of consciousness, due to the need to unravel what's meant by "causal implication." This is done in my forthcoming book Probabilistic Logic Networks in some detail, but I'll give the basic outline here.

Causal implication may be defined as: Predictive implication combined with the existence of a plausible causal mechanism.

More precisely, if A and B are two classes of events, then A "predictively implies B" if it's probabilistically true that in a situation where A occurs, B often occurs afterwards. (Yes, this is dependent on a model of what is a "situation", which is assumed to be part of the mind assessing the predictive implication.)

And, a "plausible causal mechanism" associated with the assertion "A predictively implies B" means that, if one removed from one's knowledge base all specific instances of situations providing direct evidence for "A predictively implies B", then the inferred evidence for "A predictively implies B" would still be reasonably strong. (In a certain logical lingo, this means there is strong intensional evidence for the predictive implication, along with extensional evidence.)

If X and Y are particular events, then the probability of "X causally implies Y" may be assessed by probabilistic inference based on the classes (A, B, etc.) of events that X and Y belong to.
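
Here's a rough sketch of how the predictive-implication part might be estimated from a log of timestamped events -- just frequency counting within a time window. (The "plausible causal mechanism" / intensional-evidence part is exactly what this sketch leaves out, the actual PLN formulas are more involved, and the event labels are made up.)

  def predictive_implication(events, A, B, window=5.0):
      """P(some B-event occurs within `window` time units after an A-event),
      estimated by simple counting over a timestamped event log."""
      a_times = sorted(t for t, label in events if label == A)
      b_times = sorted(t for t, label in events if label == B)
      if not a_times:
          return 0.0
      followed = sum(1 for ta in a_times
                     if any(ta < tb <= ta + window for tb in b_times))
      return followed / len(a_times)

  # Hypothetical event log: (timestamp, event label)
  log = [(0.0, "will-to-raise-arm"), (0.4, "arm-raised"),
         (3.0, "will-to-raise-arm"), (3.5, "arm-raised"),
         (7.0, "will-to-raise-arm"),                       # this time nothing happens
         (9.0, "arm-raised")]                              # ...and sometimes it just happens

  p = predictive_implication(log, "will-to-raise-arm", "arm-raised", window=1.0)
  print(p)   # 2 of 3 willings were followed by the action within the window -> ~0.67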

In What Sense Is Will Free?

But what does this say about the philosophical issues traditionally associated with the notion of "free will"?

Well, it doesn't suggest any validity for the idea that will somehow adds a magical ingredient beyond the familiar ingredients of "rules" plus "randomness." In that sense, it's not a very radical approach. It fits in with the modern understanding that free will is to a certain extent an "illusion."

However, it also suggests that "illusion" is not quite the right word.

The notion that willed actions somehow avoid the apparently-deterministic/stochastic nature of the universe is not really part of the subjective experience of free will ... it's a conceptual add-on that comes from trying to integrate our subjective experience with the modern scientific understanding of the world, in an overly simplistic and partially erroneous way.

An act of will may have causal implication, according to the psychological definition of the latter, without this action of will violating the basic deterministic/stochastic equations of the universe. The key point is that causality is itself a psychological notion (where within "psychological" I include cultural as well as individual psychology). Causality is not a physical notion; there is no branch of science that contains the notion of causation within its formal language.

In the internal language of mind, acts of will have causal impacts -- and this is consistent with the hypothesis that mental actions may potentially be ultimately determined via deterministic/stochastic lower-level dynamics. Acts of will exist on a different level of description than these lower-level dynamics.

The lower-level dynamics are part of a theory that compactly explains the behavior of cells, molecules and particles; and some aspects of complex higher-level systems like brains, bodies and societies. Will is part of a theory that compactly explains the decisions of a mind to itself.

My own perspective is that neither the lower-level dynamics (e.g. physics equations) nor will should be considered as "absolutely real" -- there is no such thing as absolute reality. The equations of physics, glorious as they are, are abstractions we've created, and that we accept due to their utility for helping us carry out various goals and recognize various regularities in our own subjective experience.


Connecting Will and Consciousness


Connecting back to our first topic, consciousness, we may say that:


In the domain of reflective conscious experiences, acts of will are experienced as causal.

This of course looks like a perfectly obvious assertion. What's nice is that it seems to fall out of a precise, abstract characterization of consciousness and will.

Free Will and Virtual Multiverse Modeling

In a previous essay, written a few years back and ultimately incorporated into The Hidden Pattern, I gave an analysis of the psychological dynamics underlying free will, the essence of which may be grokked from the following excerpt:

For example, suppose I am trying to decide whether to kiss my beautiful neighbor. One part of my brain is involved in a dynamic which will actually determine whether I kiss her or not. Another part of my brain is modeling that first part, and doesn’t know what’s going to happen. A virtual multiverse occurs in this second part of the brain, one branch in which I kiss her, the other in which I don’t. Finally, the first part comes to a conclusion; and the second part collapses its virtual multiverse model almost instantly thereafter.

The brain uses these virtual multiverse models to plan for multiple contingencies, so that it is prepared in advance, no matter what may happen. In the case that one part of the brain is modeling another part of the brain, sometimes the model produced by the second part may affect the actions taken by the first part. For instance, the part (call it B) modeling the action of kissing my neighbor may come to the conclusion that the branch in which I carry out the action is a bad one. This may affect the part (call it A) actually determining whether to carry out the kiss, causing the kiss not to occur. The dynamic in A which causes the kiss not to occur, is then reflected in B as a collapse in its virtual multiverse model of A.


Now, suppose that the timing of these two causal effects (from B to A and from A to B) is different. Suppose that the effect of B on A (of the model on the action) takes a while to happen (spanning several subjective moments), whereas the effect of A on B (of the action on the model) is nearly instantaneous (occurring within a single subjective moment). Then, another part of the brain, C, may record the fact that a collapse to definiteness in B’s virtual multiverse model of A preceded an action in A. On the other hand, the other direction of causality, in which the action in A caused a collapse in B’s model of A, may be so fast that no other part of the brain notices that this was anything but simultaneous. In this case, various parts of the brain may gather the mistaken impression that virtual multiverse collapse causes actions; when in fact it’s the other way around. This, I conjecture, is the origin of our mistaken impression that we make “decisions” that cause our actions.
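To make the timing asymmetry in the quoted passage concrete, here is a purely illustrative toy sketch in Python. The delay values and the framing of A, B and C as simple variables are my own inventions for illustration, not anything from the original essay: an observer C that hears about B's model-collapse faster than it hears about A's overt action will record the collapse as coming first, and may infer the causality backwards.

```python
# Toy sketch, invented numbers: A settles on an action; B's model of A collapses
# almost immediately afterwards; C learns of B's collapse quickly but learns of
# A's overt action only after a longer delay, and so records the wrong order.

A_ACTS_AT = 10      # time step at which A's dynamics settle on "kiss"
A_TO_B_DELAY = 0    # A's outcome propagates to B's model almost instantly
B_TO_C_DELAY = 1    # C notices the collapse in B's model quickly
A_TO_C_DELAY = 5    # C notices A's overt action only several moments later

b_collapse_seen_by_c = A_ACTS_AT + A_TO_B_DELAY + B_TO_C_DELAY
a_action_seen_by_c = A_ACTS_AT + A_TO_C_DELAY

if b_collapse_seen_by_c < a_action_seen_by_c:
    print("C's record: B's model collapsed, then A acted")
    print("C's (mistaken) inference: the 'decision' caused the action")
else:
    print("C's record: A acted first, then B's model collapsed")
```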



How does this relate to the current analysis in terms of hypersets?

The current analysis adds an extra dimension to the prior one, which has to do with what in the above quote is called the "second part" of the brain involved with the experience of will -- the "virtual multiverse modeler" component.

The extra dimension has to do with the ability of the virtual multiverse modeler to model itself and its own activity.

My previous theory discusses perceived causal implications between actions taken by one part of the brain, and models of the consequences of these actions occurring in another part (the virtual multiverse modeler). It notes that sometimes the mind makes mistakes in perceiving a causal implication between a collapse in the virtual multiverse model and an action, when a more careful understanding of the mental dynamics would reveal a more powerful causal implication in the other direction. There is much evidence for this in the neuropsychology literature, some of which is reviewed in my previous article.

The new ingredient added by the present discussion is an understanding that the virtual multiverse modeler can model its own activity and its relationship with the execution of actions. Specifically, the virtual multiverse modeler can carry out modeling in terms of an intuitive notion of "will" that may be formalized as I described above:


"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}



where "S" refers specifically to the virtual multiverse modeler component, the nexus of the feeling of will.

And, as noted in my prior essay, it may do so whether or not this causal implication would hold up when the dynamics involved were examined at a finer level of granularity.
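For readers who like things concrete, here is a tiny illustrative sketch of my own, only loosely analogous to a genuine hyperset: an ordinary Python dictionary can at least mimic the circularity of the definition, by containing a reference to itself as part of its own content.

```python
# Illustrative only: a cyclic data structure mimicking the self-referential
# flavor of the definition above. (Python dicts are not hypersets, but they
# can contain references to themselves.)

will_of_S = {"claim": "S wills X", "causally_implies": "S does X"}
will_of_S["declarative_content_of"] = will_of_S   # the statement refers to itself

print(will_of_S["declarative_content_of"] is will_of_S)   # True
```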

Who Cares?

Well, now, that's a whole other question, isn't it....

Personally, I find it interesting to progressively move toward a greater and greater understanding of the processes that occur in my own mind every day. Since understanding (long ago) that the classical notion of "free will" is full of confusions, I've struggled to figure out the right attitude to take in my own mind, toward decisions that come up in my own life.

Day after day, hour after hour, minute after minute, I'm faced with deciding between option A and option B -- yet how seriously can I take this decision process if I know I have no real will anyway?

But the way I try to think about it is as follows: Within the descriptive language in which my reflective consciousness exists, my will does exist. It may not exist within the descriptive language of physics, but that's OK. None of these descriptive languages has an absolute reality. But, each of these descriptive languages can usefully help us understand the others (as well as helping us to understand the world directly); and having an understanding of the systematic biases made by the virtual multiverse modeler in my brain has certainly been useful to me. It has given me a lot more respect for the underlying unconscious dynamics governing my decisions, and this I believe has helped me to learn to make better decisions.

In terms of my AI work, the main implication of the train of thought reported here is that in order to experience reflective consciousness and will, an AI system needs to possess an informal internal language allowing the expression of basic hyperset constructs. Of course, in different AI designs this could be achieved in different ways: for instance, it could be explicitly wired into the formalism of a logic-based AI system, or it could emerge spontaneously from the dynamics of a neural-net-based AI system. In a recent paper I explored some hypothetical means via which a neural system could give rise to a neural module F that acts as a function taking F as an input; this sort of phenomenon could potentially serve as a substrate for an internal hyperset language in the brain.
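As a trivial illustration of that last point (my own sketch, not anything from the paper referred to above), here is a function F that is handed F itself as an argument -- self-application being the basic move a hyperset-friendly internal language needs to support:

```python
# A function that takes itself as input (self-application), purely as an
# illustration of the kind of construct an internal hyperset language needs.

def F(f, depth=0):
    """Apply the argument f (which may be F itself) a few times, then stop."""
    if depth >= 3:
        return f"reached depth {depth}; the argument was F itself: {f is F}"
    return f(f, depth + 1)

print(F(F))   # F applied to F applied to F ...
```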

There is lots left to explore and understand, of course. But my feeling is that reflective consciousness and will, as described here, are not really so much trickier than other mental phenomena like logical reasoning, language understanding and long-term memory organization. Hypersets are a different formalism than the ones typically used to model these other aspects of cognition, but ultimately they're not so complex or problematic.

Onward!

Thursday, February 14, 2008

Psi, Closed-Mindedness and Fear

Some of the followup (private) emails I've gotten in regard to my just-prior blog post on Damien Broderick's book on psi, have really boggled my mind.

These emails basically present arguments of two forms:

  1. You're nuts, don't you know all the psi experiments are fraud and experimental error, everyone knows that...
  2. Look, even if there's a tiny chance that some psi phenomena are real, you're a fool to damage your reputation by aligning yourself with the kooks who believe in it

What shocks me (though it shouldn't, as I've been around 41 years and seen a lot of human nature already) about arguments of the first form is the irrational degree of skepticism toward this subject, displayed by otherwise highly rational and reflective individuals.

It's not as though these people have read Damien's book or carefully studied the relevant literature (I would welcome debate with suitably informed skeptics). Rather, these people dismiss the experimental literature on psi based on hearsay, and don't consider it worth their while to spend the 3-10 hours (depending on individual reading speed) required to absorb a fairly straightforward nontechnical book on the subject, like Damien's.

What shocks me about arguments of the second form is how often they come from individuals who are publicly aligned with other extremely radical ideas. For instance, a few Singularitarians have emailed me and warned me that my talking about psi is bad, because then people will think Singularitarians are kooks.

(Amusingly, one Singularitarian pointed out in their conversation with me that, to them, the best argument for the possibility of psi that they know of is the Simulation Argument, which contends that we probably live in a computer simulation. This is I suppose based on the idea that the laws of physics somehow rule out psi, which they don't; but anyway it's an odd argument because whether we live in a simulation or not, the laws of physics are merely a compact summary of our empirical observations of the world we see, and so if psi data are real, they need to be incorporated into our observation-set and accounted for in our theories, regardless of whether we interpret these theories as being about a "real" world or a "simulated" one.)

Whoa!! So psi is so far out there that people who believe the universe is a simulation and the Singularity is near don't want their reputations poisoned by association with it?

This really baffles me.

I have no personal axe to grind regarding psi.

I have never had any unambiguous, personally convincing psi experiences (except when under the influence of various psychotropic compounds, but that's a whole other story ;-)....

I don't actually care much whether psi is real or not.

About psi and physics ... I am skeptical of attempts to explain psi based on quantum theory, due to not understanding how decoherence would be avoided in the hypothesized long-range quantum nonlocal binding between brains and other systems; but I recognize that quantum theory as such does not actually rule out psi. And, I am acutely aware that modern physics theories are incomplete, even leaving out psi data -- just taking into account well-accepted physics data. Modern physics does not provide a complete, conceptually consistent accounting of all well-accepted physics data. So all in all, our incomplete physics model doesn't rule out psi but makes it hard to explain. This does not seem a strong enough reason to ignore the available psi data on theoretical-physics grounds.

My observation is merely that, after spending a few dozen hours perusing the available data, it seems fascinating and compelling. Ed May's data is not the only good data out there by any means, but it's a great place to start if you want to dig into it.

I do not think we, as a community of thinking and understanding minds, should be ignoring all this high-quality data collected by serious, intelligent, careful scientists.

What is the reason for ignoring it? Presumably the reason is that a bunch of bullshit about psi has been promoted by a bunch of flakes and kooks. It's true. I admit it, Damien admits it, it's obvious. Let's get over that historical and cultural reality and look at the actual data -- quite possibly there's something to be learned from it. I don't know exactly what, but that's how science works -- you investigate and then you find out. What's frustrating is that in this extremely fascinating, important, potentially highly impactful area, research is proceeding so slowly because of excesses of skepticism and fear in the scientific community.

Scientists want to preserve their careers and reputations, so going out on a limb for something perceived as wacky is something very few of them are willing to do. As a consequence our understanding of the universe advances much more slowly than it otherwise could.

Finally, a brief aside.... For those who believe a Singularity is likely but who are highly skeptical of psi (a small percentage of the world, but disproportionately represented in the readership of this blog, I would imagine), I ask you this: Wouldn't it be nice to understand the universe a little better before launching a Singularity? If psi is real that would seem to have various serious implications for what superhuman AI's may be like post-Singularity, for example.

Well, anyway. I'm going to drop this topic for now as I have other stuff to focus on, like building AGI.... And I've been (finally) mixing down some of my music from MIDI to MP3; I'll post some on my website within the next month or so.... I don't have time to push ahead psi research myself nor to actively advocate for funding for those doing the research; but by writing these blog posts and reviewing Damien's book on Amazon.com, I've tried to do what I can (within my limited available time) to nudge the world toward being less closed-minded and less fearful in this regard.

Come on, people! Really! Have some guts and some mental-openness -- it's a big, weird, mysterious world out there, and I'm damn sure we understand only a teensy weensy bit of it. Experience gives us clues, empirical science gives us clues -- and the extent to which we manage to ignore some of the most interesting clues the world provides us, is pretty disappointing...

Saturday, February 02, 2008

The Scientific Evidence for Psi (is most likely stronger than you think)

My goal in this blog is to convince you to read Damien Broderick's book Outside the Gates of Science: Why It's Time for the Paranormal to Come in From the Cold.

Reviewing a host of research done by others over many decades, the book makes a remarkably and excitingly strong case that psi phenomena are worthy of intensive further investigation....

Let me explain why I'm so excited by Broderick's work.

Having grown up on SF, and being a generally open-minded person but also a mathematician/scientist with a strong rationalist and empiricist bent, I've never quite known what to make of psi. (Following Broderick, I'm using "psi" as an umbrella term for ESP, precognition, psychokinesis, and the familiar array of suspects...).

Broderick's book is the first I've read that rationally, scientifically, even-handedly and maturely, reviews what it makes sense to think about psi given the available evidence.

(A quick word on my science background, for those who don't know me and may be new to this blog: I have a math PhD, and although my main research areas are AI and cognitive science, I've also spent a lot of time working on empirical biological science as a data analyst. I was a professor for 8 years but have been doing research in the software industry for the last decade.)

My basic attitude on psi has always been curious but ambivalent. One way to summarize it would be via the following three points....

First: Psi does not seem, on the face of it, wildly scientifically implausible in the fashion of, say, perpetual motion machines built out of wheels and pulleys and spinning chambers filled with ball bearings. Science, at this point, understands the world only very approximately, and there is plenty of room in our current understanding of the physical universe for psi. Quantum theory's notions of nonlocality and resonance are conceptually somewhat harmonious with some aspects of psi, but that's not the main point. The main point is that science does not rule out psi, in the sense that it rules out various sorts of crackpottery.

Second: Anecdotal evidence for psi is so strong and so prevalent that it's hard to ignore. Yes, people can lie, and they can also be very good at fooling themselves. But the number of serious, self-reflective, intelligent people who report various sorts of psi experiences is not something that should be glibly ignored.

Third: There is by now a long history of empirical laboratory work on psi, with results that are complex, perplexing, but in many ways so apparently statistically significant as to indicate that SOMETHING important is almost surely going on in these psi experiments...

Broderick, also being an open-minded rationalist/empiricist, seems to have started out his investigation of psi, as reported in his book, with the same basic intuition as I've described in the above three points. And he covers all three of these points in the book, but the main service he provides is to very carefully address my third point above: the scientific evidence.

His discussion of possible physical mechanisms of psi is competent but not all that complete or imaginative; and he wisely shies away from an extensive treatment of anecdotal evidence (this stuff has been discussed ad nauseam elsewhere). But his treatment of the scientific literature regarding psi is careful, masterful and compellingly presented. And this is no small achievement.

The scientific psi literature is large, complex, multifaceted and subtle -- and in spite of a lifelong peripheral fascination with psi, I have never taken the time to go through all that much of it myself. I'm too busy doing other sorts of scientific, mathematical and engineering work. Broderick has read the literature, sifted out the good from the bad, summarized the most important statistical and conceptual results, and presented his conclusions in ordinary English that anyone with a strong high school education should be able to understand.

His reviews of the work on remote viewing and precognition I found particularly fascinating, and convincing. It is hard to see how any fair-minded reader could come away from his treatments of these topics without at least a sharp pang of curiosity regarding what might actually be going on.

Perhaps my most valued discovery, based on Broderick's book, was Edwin May's work on precognition and related phenomena. Anyone with a science background is strongly encouraged to inspect the website of May's Cognitive Sciences Laboratory, which hosts an impressive collection of papers on his team's government-funded psi research.

What is my conclusion about psi after reading Damien's book, and exploring in more depth the work of May's team and others?

Still not definitive -- and indeed, Broderick's own attitude as expressed in the book is not definitive.

I still can't feel absolutely certain whether psi is a real phenomenon; or whether the clearly statistically significant patterns observed across the body of psi experiments bespeak some deep oddities in the scientific method and the statistical paradigm that we don't currently understand.

But after reading Broderick's book, I am much more firmly convinced than before that psi phenomena are worthy of intensive, amply-funded scientific exploration. Psi should not be a fringe topic, it should be a core area of scientific investigation, up there with, say, unified physics, molecular biology, AI and so on and so forth.

Read the book for yourself, and if you're not hopelessly biased in your thinking, I suspect you'll come to a conclusion somewhat similar to mine.

As a bonus, as well as providing a profound intellectual and cultural service, the book is a lot of fun to read, due to Broderick's erudite literary writing style and ironic sense of humor.

My worry -- and I hope it doesn't eventuate -- is that the book is just too far ahead of its time. I wonder if the world is ready for a rational, scientific, even-handed treatment of psi phenomena.

Clearly, Broderick's book is too scientific and even-handed for die-hard psi believers; and too psi-friendly (though in a level-headed, evidence-based way) for the skeptical crowd. My hope is that it will find a market among those who are committed to really understanding the world, apart from the psychological pathologies of dogmatism or excessive skepticism.

I note that Broderick has a history of being ahead of his time as a nonfiction writer. His 1997 book "The Spike" put forth basically the same ideas that Ray Kurzweil later promulgated in his 2005 book "The Singularity Is Near." Kurzweil's book is a very good one, but so was Broderick's; yet Kurzweil's got copious media attention whereas Broderick's did not ... for multiple reasons, one of which, however, was simply timing. The world in 1997 wasn't ready to hear about the Singularity. The world in 2006 is.

The question is: is the world in 2008 ready to absorb the complex, fascinating reality of psi research? If so, Broderick's book should strike a powerful chord. It certainly did for me.

Friday, January 25, 2008

Yverse: A New Model of the Universe


A new model of the universe?

Actually, yeah.

It starts out with the familiar concept of the "multiverse," which is mainly associated with the many-universes interpretation of quantum theory.

According to one verbalization of the multiversal interpretation of quantum theory, every time a quantum-random "choice" is made (say, an electron spins up instead of down), there is a "branching" into two possible universes: one where the electron spins up, another where it spins down.

Similarly, if a bus drives at you while you're walking across the street, there may be two possible universes ahead of you: one where you get flattened, and another where you don't. (Actually, there are a lot of other choices going on in your life too, so it's more accurate to say there is one set of universes where you get flattened and another where you don't).

The collection of all these possible universes is known as the "multiverse."

In fact the language of "choice" used in the above description of the multiverse is a bit suspect. It's more accurate to say that corresponding to each possible state of the electron (up/down) once it is coupled with the external environment (so that it decoheres), there is a set of branches of the multiverse, and leave the ambiguous and misleading language of "choice" out of it.

Anyway, the multiverse is fascinating enough, but it's just the beginning.

It's easy enough to think of multiple possible multiverses. After all, there could be a multiverse in which Ben Goertzel never existed at all, in any of its branches.

One way to think about backwards time travel, for instance, is as a mechanism for selecting between multiverses. If you go back in time and change something, then you're effectively departing your original multiverse and entering a new one.

So, we can think about a multi-multiverse, i.e. a collection of multiverses, with a certain probability distribution over them.

I don't posit this hypothesis all that seriously, but I'm going to throw it out there anyway: It seems possible to conceive of consciousness as a faculty that facilitates movement between multiverses!

Well, I guess you can see where all this is going.

If there's a multi-multiverse, there can also be a multi-multi-multiverse. And so on.

But that is not all -- oh no, that is not all ;-)

What about the multi-multi-...-multi-multiverse?

I.e., the entity Yverse such that

Yverse = multi-Yverse

??

Math wonks will have already inferred that I chose the name Yverse because of the Y-combinator in combinatory logic, which is defined via

Yf = f(Yf)

In other words

Yf = ...ffff...

(where the ... goes on infinitely many times)
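For the curious, here is a minimal sketch of the idea in Python -- the strict, applicative-order variant of the combinator (often called Z), since Python evaluates arguments eagerly. This is standard material, included only to make the "Yf = f(Yf)" equation concrete:

```python
# Applicative-order Y combinator: Y(f) behaves like f(Y(f)).

Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Example: factorial defined without naming itself.
fact = Y(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))   # 120
```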

So the Yverse is the (Y multi-) universe ...

In the Yverse, there are multiple branches, each one of which is itself a Yverse....

Two Yverses may have two kinds of relationship: sibling (two branches of the same parent Yverse) or parent-child.

Backwards time travel may jolt you from one Yverse to a parent Yverse. Ordinary quantum decoherence events merely correspond to differences between sibling Yverses.

If there is a probability distribution across a set of sibling Yverses, it may be conceived as an infinite-order probability distribution. (A first-order probability distribution is a distribution across some ordinary things like numbers or particles, or universes. A second-order probability distribution is a distribution across a set of first-order probability distributions. Well, you get the picture.... An infinite-order probability distribution is a probability distribution over a set of infinite-order probability distributions. I've worked out some of the math of this kind of probability distribution, and it seems to make sense.)
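A quick illustrative sketch of the finite-order case (my own toy example; the infinite-order case is exactly the part that would need new mathematics): a second-order distribution can be sampled by first drawing a first-order distribution, then drawing an outcome from it.

```python
import random

def sample_first_order_distribution():
    """Draw one first-order distribution over {heads, tails}, i.e. a coin bias."""
    return random.random()

def sample_second_order():
    """Second-order sampling: pick a distribution, then sample an outcome from it."""
    p_heads = sample_first_order_distribution()
    return "heads" if random.random() < p_heads else "tails"

print([sample_second_order() for _ in range(10)])
```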

What use is the Yverse model? I'm not really sure.

It seems to be an interesting way to think about things, though.

If I had more time for pure intellectual entertainment, I'd put some effort into developing a variant of quantum theory based on Yverses and infinite-order probabilities. It seems a notion worth exploring, especially given work by Saul Youssef and others showing that the laws of quantum theory emerge fairly naturally from the laws of probability theory, with a few extra assumptions (for instance, in Youssef's work, the assumption that probabilities are complex rather than real numbers).

And reading Damien Broderick's excellent book on psi, "Outside the Gates of Science," got me thinking a bit about what kinds of models of the universe might be useful for explaining psi phenomena.

Yes, quantum theory is in principle generally compatible with psi, so one doesn't need wacky ideas like Yverses to cope with psi, but it's fun to speculate. It seems to me that for quantum theory to account for psi phenomena would require some really far-out long-range quantum-coherence to exist in the universe, which doesn't seem to be there. So in my view it's at least sensible to speculate about how post-quantum physics might account for psi more sensibly.

This babbling about psi leads back to my wacko speculation above that consciousness could be associated with action in the multi-multiverse. In the Yverse model, the idea becomes that consciousness could be associated with action in the parent Yverse.

Could the difference between physical action and mental action be that the former has to do with movement between sibling Yverses, whereas the latter has to do with movement between parent and child Yverses?

Well I'll leave you on that note --

I've gone pretty far "out there", I guess about as far as it's possible to go ;-> ....

(Unless I could work Elvis into the picture somehow. I thought about it, but didn't come up with anything....)


-- (semi-relevant, rambling) P.S. Those who are interested in my AI work may be interested to know that I don't consider any of these funky speculations contradictory to the idea of creating AI on digital computers. The whole connection between probability, complex probability, quantum theory, determinism and complexity fascinates me -- and I consider it extremely poorly understood. For example, I find the whole notion of "determinism" in very complex systems suspect ... in what sense is a digital computer program determinate relative to me, if I lack the computational capability to understand its state or predict what it will do? If I lack the computational capability to understand some thing X, then relative to my own world-view, should X be modeled according to complex rather than real probabilities, in the vein of Yousseffian quantum probability theory? I suspect so. But I won't pursue this any more here -- I'll leave it for a later blog post. Suffice to say, for now, that I have a feeling that our vocabulary for describing complex systems, with words like "determinate" and "random", is woefully inaccurate and doesn't express the really relevant distinctions.

Saturday, January 19, 2008

Japanese Gods Pray for a Positive Singularity


In September 2007 I went on a two week business/science trip to China (Wuhan and Beijing) and Japan (Tokyo). In between some very interesting and productive meetings, I had a bit of free time, and so among other things I wound up formally submitting a prayer to the Japanese gods for a rapid, beneficial technological Singularity. Let's hope they were listening!

I wrote this blog post on the flight home but wasn't in a silly enough mood to post it till now.

(Scroll to the bottom if you're in a hurry; after all the irrelevant rambling beforehand, there's a sort of punchline there, involving the mysterious inscription in the above picture.)

My trip started in Wuhan, where I gave two talks at an AI conference and visited with Hugo de Garis and his students (his apprentice "brain builders"). Their near-term goal is to use genetic algorithms running on field-programmable gate arrays to control a funky little robot.

China was probably the most fascinating place I've ever visited (and I've visited and lived a lot of places), though in this brief trip I hardly got to know it at all. Society there is Westernizing fast (I've never seen anywhere more capitalist than modern China), but, there are still incredibly deep and dramatic differences between the Chinese and Western ways of thinking and living. As soon as I stepped into the airport, I was struck by the collectivist nature of their culture ...


... so very different from my own upbringing in which individuality was always held out as one of the highest values (I remember a book my mother got me as a young child, entitled Dare to Be Different -- a sort of history of famous nonconformists). There are of course many Chinese nonconformists (there are so many Chinese, there are many Chinese everything!), but in so many ways their whole society and culture is based on placing the group above the individual. (Which leads, among other things, to their enthusiasm for importing individualist Western scientists like Hugo de Garis.... But this is a topic for another blog post, some other day ... let me get on with my little story....)

Wuhan was a fascinating slice of "old China", with folks sitting out on the streets cooking weird food in woks, strange old men looking like they lived in 500 BC, and everywhere people, people, people. Alas I forgot to take pictures during my walks through the streets there.

Beijing by comparison was not too interesting -- too much like a modern Western city, but with terrible, yellow, reeking air. But the Great Wall, a bit north of Beijing, was really an amazing place. Too bad you aren't allowed to hike its full distance.



While hiking along the Great Wall, I asked for a sign from the Chinese gods that a positive Singularity was truly near. As if in some kind of response, a sudden gust of wind came up at that point...


I thought maybe the local gods would look more favorably on me if I ate some of the local cuisine, so I filled up on donkey, whole bullfrog, sea cucumber, duck's blood and pig foot fur and so forth. Not so bad as it sounds, but I still preferred the kung pao chicken.

(As well as consuming various recondite foodstuff items, in Beijing I visited the offices of HiPiHi.com, a very exciting Chinese virtual-worlds company ... but that's another story for another time....)

Next, I moved on to Tokyo (after some inordinately unpleasant logistical experiences in Beijing Capital airport, which I'd rather not revisit even in memory). The company I was visiting there was based in Shibuya, a suitably colorful and hypermodern Tokyo neighborhood:




Based on years of looking over my sons' shoulders as they watch anime', I expected all the Japanese people to look like these statues near Shibuya station:

In fact, some of the people I saw weren't so far off:

But more of them looked like this:

The Japanese love robots and cyborgs, and many of them seem to exhibit this love via making their own human selves as robotic as possible -- which is fascinating but odd, from my aging-American-hippy perspective. (I badly want to go beyond the human forms of body and mind, but I suppose that once this becomes possible, the result won't be much like contemporary machines -- rather it'll be something more fluid and flexible and creative than rigid old humanity.)

Toward the end of my stay, I got fed up with the hypermodernity, and I visited an old-time shrine in a beautiful park...


where I happened upon an intriguing site where Japanese go to submit prayers to the gods.

Each prayer is written down on a little piece of wood (which you buy for five dollars), then placed on a special prayer rack with all the others. The gods then presumably sort through them all (maybe with secretarial help from demigods or some such -- I didn't ask for the details), and decide which ones are worth granting, based on whatever godly criteria they utilize.


At first, the very concept caused the sea cucumber, duck's blood and twice-cooked donkey I'd eaten a few days before, much of which was still lingering in my stomach enjoying itself, to surge up through my gastrointestinal tract in a kind of disturbingly pleasing psychedelic can-can dance....


My next reaction was curiosity regarding what everyone else had prayed for. Sure, I could sorta guess, but it would have been nice to know in detail. But as the prayers were nearly all in Japanese, I couldn't really tell what they were all about, though a few gave small clues:


In the end, not wanting to be left out, I plunked down some yen to buy a little piece of wood and submitted my own prayer to the Japanese gods, to be considered along with the multitude of other human wants and needs. Hopefully the Japanese gods were in a generous mood that day -- for all our sakes!



Sunday, January 06, 2008

Nincompoopic Neurons, Global Brains and the Potential Sociological Applications of Adaptive Stochastic Resonance

My immediately previous blog post, on the apparently in-large-part nincompoopic nature of the emerging global brain

http://www.goertzel.org/blog/2007/12/global-moron-awakens.html


attracted so many comments (largely on various mailing lists I posted the blog URL to), that I figured I'd post a brief response here, expanding on some of the ideas in the responses and connecting them with some ideas from dynamical systems theory.

Most of the feedback I got was in the general vein of a blog post I wrote a couple months earlier, entitled "On Becoming a Neuron":

http://www.goertzel.org/blog/2007/10/on-becoming-neuron.html

The theme of "Becoming a Neuron" was how dependent we are, these days, on the global communication network and the emerging human group mind.

The theme of "The Global Nincompoop Awakens" was how many of the communications between the "human neurons" comprising the global brain seem completely idiotic in nature.

Reading through the comments on the Global Nincompoop post, I was struck by the theme of Bart Kosko's book Noise

http://www.amazon.com/Noise-Bart-Kosko/dp/0670034959


(a somewhat erratic book, but containing some very interesting ideas). Among other topics he reviews the way the brain's self-organizing cognitive dynamics depend on the high level of noise present in the brain, introducing the general notion of "adaptive stochastic resonance", according to which

Noise can amplify a faint signal in some feedback nonlinear systems even though too much noise can swamp the signal. This implies that a system’s optimal noise level need not be zero

(Google or Wikipedia "adaptive stochastic resonance" for a load of technical papers on the topic, by Kosko and others).

An interesting illustration of this phenomenon is the following figure from Kosko's paper



This picture shows nicely how, in the context of the human perceptual system, adding noise can help make patterns more perceptible.

(What's happening in the picture is that he's adding noise to the pixels in the picture, then applying a threshold rule to decide which pixels are black enough to display. Without enough noise, too few of the pixels carrying the actual image cross the threshold; with too much noise, too many pixels cross it at random and swamp the image. In between, the noise lifts enough of the image-bearing pixels over the threshold without drowning them out. So to optimize perception by a threshold-based system, you want an amount of noise lying in a certain interval -- not too little nor too much.)
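Here is a simple numerical sketch of the same effect, without the image. This is my own toy version of the standard threshold-detector demonstration, not Kosko's code, and the parameters are arbitrary: a weak, sub-threshold sine wave is passed through a hard threshold, and the correlation between the input signal and the thresholded output peaks at an intermediate noise level.

```python
import math
import random

def output_signal_correlation(noise_sigma, n=5000, amp=0.8, threshold=1.0):
    """Correlation between a sub-threshold sine signal and its noisy, thresholded version."""
    xs = [amp * math.sin(2 * math.pi * t / 50.0) for t in range(n)]
    ys = [1.0 if x + random.gauss(0.0, noise_sigma) > threshold else 0.0 for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

for sigma in (0.05, 0.2, 0.5, 1.0, 3.0):
    print(f"noise sigma = {sigma}: signal/output correlation = {output_signal_correlation(sigma):.2f}")
```

With no noise the weak signal never crosses the threshold (correlation zero); with far too much noise the output is essentially random; somewhere in between, the noise lifts the signal over the threshold often enough to carry real information.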

Now, Kosko verges on insinuating that this kind of exploitation of noise is somehow a NECESSARY property of intelligent systems, which I doubt. However, it seems plausible that he's right about its role in the human brain and human perception/cognition.

Semi-relatedly, I recall reading somewhere that motion-sensing neurons in the brain are, on average, off by around 80 degrees in their assessment of the direction of motion of a percept at a certain point in the visual field. But we can still assess the direction of motion of an object fairly accurately, because our brains perform averaging, and the noisy data gets washed out in the average.
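A back-of-the-envelope sketch of that averaging effect (my own numbers: I'm treating "off by around 80 degrees" as Gaussian noise with an 80-degree spread, which is only a rough stand-in):

```python
import math
import random

true_direction = 90.0   # degrees
# 1000 "neurons", each wildly inaccurate on its own
estimates = [true_direction + random.gauss(0.0, 80.0) for _ in range(1000)]

# Average as unit vectors so that wrap-around (e.g. 350 vs 10 degrees) is handled sanely
sx = sum(math.cos(math.radians(e)) for e in estimates)
sy = sum(math.sin(math.radians(e)) for e in estimates)
print(f"pooled estimate: {math.degrees(math.atan2(sy, sx)):.1f} degrees")   # close to 90
```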

In other words, brains contain a lot of noise, and they contain mechanisms for working around this fact (e.g. averaging) and creatively exploiting it (e.g. adaptive stochastic resonance).

Now, it's not too surprising if the emerging Global Brain of humanity is more like a brain than like a well-engineered computer program. In other words: most of what goes on in the global brain, like most of what goes on in the human brain, is likely to be noise ... and there are likely to be mechanisms for both working around the noise, and exploiting it.

This brings up the interesting question of what techniques may exist in sociological dynamics for exploiting noise.

How might adaptive stochastic resonance, for example, play a role in sociodynamics? Could it be that the random noise of nincompoopic social interactions serves to make significant sociodynamic patterns stand out more clearly to our minds, thus actually enhancing the ability of the Global Brain to recognize patterns in itself?

I wonder how one would design an experiment to demonstrate or refute this. It would of course be difficult, due to the massive number of confounding factors in any social system, and the difficulty of defining things like pattern and noise in the social domain as precisely as is possible in a domain like image processing (where of course these terms are still susceptible to a variety of interpretations).

And surely this simple idea -- obtained by extrapolating Kosko's image-processing example to the sociological domain -- is not the only possible way that social systems could profitably exploit their intrinsic noisiness.

But still, it's an intriguing train of thought....

(P.S. The question of whether this kind of chaotic, noisy, self-organizing system is remotely the best way to carry out creative computation is a whole other question, of course. My own strong suspicion is that human brains are incredibly inefficient at using their computational power, compared to other sorts of intelligent systems that will exist in the future; and the Global Brain likely shares this inefficiency, for similar reasons. However, this inefficiency is partially compensated for in both cases by biological systems' (neurons' and humans') prodigious capability for replication....)

Saturday, December 08, 2007

The Global Nincompoop Awakens

On a recent business trip to New York, I found myself sitting for a couple hours in a Starbucks in the midst of the campus of New York University (which is not a walled campus, but rather a collection of buildings strewn semi-haphazardly across a few blocks of Greenwich Village).

While sitting there typing into my laptop, I couldn't help being distracted by the conversations of the students around me. I attended NYU in the mid-80's (doing a bit of graduate study there on the way to my PhD), and I was curious to see how the zeitgeist of the student body had changed.

Admittedly, this was a highly nonrepresentative sample, as I was observing only students who chose to hang out in Starbucks. (Most likely all the math and CS grad students were doing as I'd done during my time at NYU, and hanging out in the Courant Institute building, which was a lot quieter than any cafe' ...). And, the population of Starbucks seemed about 65% female, for whatever reason.

The first thing that struck me was the everpresence of technology. The students around me were constantly texting each other -- there was a lot of texting going on between people sitting in different parts of the Starbucks, or people waiting in line and other people sitting down, etc.

And, there was a lot of talk about Facebook. Pretty much anytime someone unfamiliar (to any of the conversation participants) was mentioned in conversation, the question was asked "Are they on Facebook?" Of course, plenty of the students had laptops there and could write on each other's Facebook walls while texting each other and slipping in the occasional voice phone call or email as well.

All in all I found the density and rapidity of information interchange extremely impressive. The whole social community of the Starbucks started to look like a multi-bodied meta-mind, with information zipping back and forth everywhere by various media. All the individuals comprising parts of the mind were obviously extremely well-attuned to the various component media and able to multiprocess very effectively, e.g. writing on someone's Facebook wall and then texting someone else while carrying on an F2F conversation, all while holding a book in their lap and allegedly sort-of studying.

Exciting! The only problem was: the content of what was being communicated was so amazingly trivial and petty that it started to make me feel physically ill.

Pretty much all the electronic back-and-forth was about which guys were cute and might be interested in going to which party with which girls; or, how pathetic it was that a certain group of girls had "outgrown" a certain other group via being accepted into a certain sorority and developing a fuller and more mature appreciation for the compulsive consumption of alcohol ... and so forth.

Which led me to the following thought: Wow! With all our incredible communications technologies, we are creating a global brain! But 99.99% of this global brain's thoughts are going to be completely trite and idiotic.

Are we, perhaps, creating a global moron or at least a global nincompoop?

If taken seriously, this notion becomes a bit frightening.

Let's suppose that, at some point, the global communication network itself achieves some kind of spontaneous, self-organizing sentience.

(Yeah, this is a science-fictional hypothesis, and I don't think it's extremely likely to happen, but it's interesting to think about.)

Won't the contents of its mind somehow reflect the contents of the information being passed around the global communications network?

Say: porn, spam e-mails, endless chit-chat about whose buns are cuter, and so forth?

Won't the emergent global mind of the Internet thus inevitably be a shallow-minded, perverted and ridiculous dipshit?

Is this what we really want for the largest, most powerful mind on the planet?

What happens when this Global Moron asserts its powers over us? Will we all find our thoughts and behaviors subtly or forcibly directed by the Internet Overmind?? -- whose psyche is primarily directed by the contents of the Internet traffic from which it evolved ... which is primarily constituted of ... well... yecchh...


(OK .. fine ... this post is a joke... OR IS IT???)

Monday, October 29, 2007

On Becoming a Neuron

I was amused and delighted to read the following rather transhumanistic article in the New York Times recently.

http://www.nytimes.com/2007/10/26/opinion/26brooks.html?_r=1&oref=slogin

The writer, who does not appear to be a futurist or transhumanist or Singularitarian or anything like that, is observing the extent to which he has lost his autonomy and outsourced a variety of his cognitive functions to various devices with which he interacts. And he feels he has become stronger rather than weaker because of this -- and not any less of an individual.

This ties in deeply with the theme of the Global Brain

http://pespmc1.vub.ac.be/SUPORGLI.html

which is a concept dear to my heart ... I wrote about it extensively in my 2001 book "Creating Internet Intelligence" and (together with Francis Heylighen) co-organized the 2001 Global Brain 0 workshop in Brussels.

I have had similar thoughts to the above New York Times article many times recently... I can feel myself subjectively becoming far more part of the Global Brain than I was even 5 years ago, let alone 10...

As a prosaic example: Via making extensive use of task lists as described in the "Getting Things Done" methodology

http://en.wikipedia.org/wiki/Getting_Things_Done

I've externalized much of my medium-term memory about my work-life.

And via using Google Calendar extensively I have externalized my long-term memory... I use the calendar not only to record events but also to record information about what I should think about in the future (e.g. "Dec. 10 -- you should have time to start thinking about systems theory in connection to developmental psychology again...")

And, so much of my scientific work these days consists of reading little snippets of things that my colleagues on the Novamente project (or other intellectual collaborators) wrote, and then responding to them.... It's not that common these days that I undertake a large project myself, because I can always think of someone to collaborate with, and then the project becomes in significant part a matter of online back-and-forth....

And the process of doing computer science research is so different now than it was a decade or two ago, due to the ready availability and easy findability of so many research ideas, algorithms, code snippets etc. produced by other people.

Does this mean that I'm no longer an individual? It's certainly different than if I were sitting on a mountain for 10 years with my eagle and my lion like Nietzsche's Zarathustra.

But yet I don't feel like I've lost my distinctiveness and become somehow homogenized --
the way I interface with the synergetic network of machines and people is unique in complexly patterned ways, and constitutes my individuality.

A neuron in the brain does not manifest its individuality any less than a neuron floating by itself in a solution. In fact, the neuron in the brain may manifest its individuality more fully, due to having a richer, more complex variety of stimuli to which it may respond individually.

None of these observations are at all surprising from a Global Brain theory perspective. But, they're significant as real-time, subjectively-perceived and objectively-observed inklings of the accelerating emergence of a more and more powerful and coordinated Global Brain, of which we are parts.

And I think this ties in with Ray Kurzweil's point that by the time we have human-level AGI, it may not be "us versus them", it may be a case where it's impossible to draw the line between us and them...

-- Ben

P.S.

As a post-script, I think it's interesting to tie this Global Brain meme in with the possibility of a "controlled ascent" approach to the Singularity and the advent of the transhuman condition.

Looking forward to the stage at which we've created human-level AGI's -- if these AGI's become smarter and smarter at an intentionally-controlled rate (say a factor of 1.2 per year, just to throw a number out there), and if humans are intimately interlinked with these AGI's in a Global Brain-like fashion (as does seem to be occurring, at an accelerating rate), then we have a quite interesting scenario.

Of course I realize that guaranteeing this sort of controlled ascent is a hard problem. And I realize there are ethical issues involved in making sure a controlled ascent like this respects the rights of individuals who choose not to ascend at all. And I realize that those who want to ascend faster may get irritated at the slow pace. All these points need addressing in great detail by an informed, intelligent and relevantly educated community, but they aren't my point right now -- my point in this postscript is the synergetic interrelation of the Global Brain meme with the controlled-ascent meme.

The synergy here is that as the global brain gets smarter and smarter, and we get more and more richly integrated into it, and the AGI's that will increasingly drive the development of the global brain get smarter and smarter -- there is a possibility that we will become more and more richly integrated with a greater whole, while at the same time having greater capability to exercise our uniqueness and individuality.

O Brave New Meta-mind, etc. etc. ;-)

Friday, June 15, 2007

The Pigeons of Paraguay (Further Dreams of a Ridiculous Man)

In the spirit of my prior dream-description Colors, I have written down another dream ... one I had last night ... it's in the PDF file linked to from

Copy Girl and the Pigeons of Paraguay


I'm not sure why I felt inspired to, but as soon as I woke up from the dream I had the urge to type it in (along with some prefatory and interspersed rambling!). It really wasn't a terribly important dream for me ... but it was interesting as an example of a dream containing a highly realistic psychedelic drug trip inside it. There is also a clear reference to the "Colors" dream within this one, which is not surprising -- my dreams all tend to link into each other, as if they form their own connected universe, separate from and parallel to this one.

I have always enjoyed writing "dreamlike" fiction, such as my freaky semi-anti-novel Echoes of the Great Farewell ... but lately I've become interested in going straight to the source, and naturalistically recording dreams themselves ... real dreams being distinctly and clearly different from dreamlike fiction. Real dreams have more ordinariness about them, more embarrassing boringness and cliche'-ness; and also more herky-jerky discoordination.... They are not as aesthetic, which of course gives them their own special aesthetic value (on a meta-aesthetic level, blah blah blah...). Their plain-ness and lack of pretension gives them, in some ways, a deeper feel of truth than their more poetized fictional cousins....

The dream I present here has no particular scientific or philosophical value, it's just a dream that amused me. It reminded me toward the end a bit of Dostoevsky's Dream of a Ridiculous Man -- not in any details, but because of (how to put it???) the weird combination of irony and sincerity with which the psychic theme of sympathy and the oneness of humankind is addressed. Yeah yeah yeah. Paraguayan pigeons!! A billion blue blistering barnacles in a thundering typhoon!!!

I'll give you some mathematics in my next blog entry ;-)

-- Ben

Saturday, June 02, 2007

Is Google Secretly Creating an AGI? (Reasons Why I Doubt It)

From time to time someone suggests to me that Google "must be" developing a powerful Artificial General Intelligence in-house. I recently had the opportunity to visit Google and chat with some of their research staff, including Peter Norvig, their Director of Research. So I thought I'd share my perspective on Google+AGI based on the knowledge currently at my disposal.

First let me say that I definitely see where the Google+AGI speculation comes from. It's not just that they've hired a bunch of AI PhD's and have a lot of money and computers. It's that their business leaders have taken to waxing eloquent about the glorious future of artificial intelligence. For instance, on the blog

http://memepunks.blogspot.com/2006/05/google-ai-twinkle-in-larry-pages-eye.html


we find some quotes from Google co-founder Larry Page:

"People always make the assumption that we're done with search. That's very far from the case. We're probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything ... some people could call that artificial intelligence.

...

a lot of our systems already use learning techniques


...

The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly ...
You could ask 'what should I ask Larry?' and it would tell you."

Page, in the same talk quoted there, noted that technology has a tendency to change faster than expected, and that an AI could be a reality in just a few years.

Exciting rhetoric indeed!

Anyway, earlier this week I gave a talk at Google, to a group of in-house researchers and engineers, on the topic of artificial general intelligence. I was rather overtired and sick when I gave the talk, so it wasn't anywhere near one of my best talks on AGI and Novamente. Blecch. Parts of it were well delivered; but I didn't pace myself as well as usual, so I wound up rushing past some of the interesting points and not giving my usual stirring conclusion.... But some of the younger staff were pretty interested anyway; and there were some fun follow-up conversations.

Peter Norvig (their Director of Research), an all-around great researcher and writer and great guy, gave the intro to my talk. I had chatted with Peter a bit earlier; and had mentioned to him that some folks I knew in the AGI community suspected Google to have a top-secret AGI project.

So anyway, Peter gave the following intro to my talk [I am paraphrasing here, not quoting exactly ... but I've tried to stay true to what he said, as accurately as possible given the constraints of my all-too-human memory]:

"There has been some talk about whether Google has a top-secret project aimed at building a thinking machine. Well, I'll tell you what happened. Larry Page came to me and said 'Peter, I've been hearing a lot about this Strong AI stuff. Shouldn't we be doing something in that direction?' So I said, okay. I went back to my desk and logged into our project management software. I had to write some scripts to modify it because it didn't go far enough into the future. But I modified it so that I could put, 'Human-level intelligence' on the row of the planning spreadsheet corresponding to the year 2030. And, that wasn't up there an hour before someone else added another item to the spreadsheet, time-stamped 90 days after that: 'Human-level intelligence: Macintosh port' "

Well ... soooo ... apparently Norvig, at least in a semi-serious tongue-in-cheek moment, thinks we're about 23 years from being able to create a thinking machine....

He may be right of course -- or he may even be over-optimistic, who knows -- but a cynical side of me can't help thinking: "Hey, Ben! Peter Norvig is even older than you are! Maybe placing the end goal 23 years off is just a way of saying 'Somebody else's problem!'."

Norvig says he views himself as building useful tools that will accelerate the work of future AGI researchers, along with everyone else....

Of course, I do appreciate Google's useful tools! Google's various tools have been quite a relief as compared to the incompetently-architected, user-unfriendly software released by some other major software firms.

And, while from a societal perspective I wish Google would put their $$ and hardware behind AGI, from the perspective of my small AGI business Novamente LLC, their current attitude is surely preferable...

[I could discourse a while about Google's ethics slogan "Don't Be Evil" as a philosophy of Friendly AI ... but I'll resist the urge...]

When I shared the above story with one of my AGI researcher friends (who shall here remain anonymous), he agreed with my sentiments, and shared the following story with me..

"In [month deleted] I had an interview in Google's new [location deleted] office
... and they were much more interested in my programming skill than in my research. Of course, we didn't find a match.

Even if Google wants to do AGI, given their current technical culture,
they won't get it right, at least at the beginning. As far as AGI is
concerned, Google has more than enough money and engineers, but less
than enough thinkers. They will produce some cute toolbox with smart
algorithms supported by a huge amount of raw data, which will be
interesting, but far from AGI."

Summing up ... as the above anecdotes suggest, my overall impression was that Google is not making any serious effort at AGI. If they are, then either

  • they have trained dozens of their scientific staff to be really good actors, or
  • it is a super-top-secret effort within Google Irkutsk or wherever, that the Google Mountain View research staff don't know about

Of course, neither of these is an impossibility -- "we don't know what we don't know," etc. But honestly, I rate both of those options as pretty unlikely.

Could they launch an AGI effort? Most surely: they could, at any point. The cost to them of doing so would be trivially small, relative to the overall resources at their disposal. Maybe this blog post will egg them into doing so! (yeah, right...)

But I think the point my above-quoted friend made, after his Google interview, was quite astute. Google's technical culture is coding-focused, and their approach to AI is data-focused (textual data, and data regarding clicks on ads, and geospatial data coming into Google Earth, etc.). To get hired at Google you have to be a great coder -- just being a great AGI theorist wouldn't be enough, for example. I don't think AGI is mainly a coding problem, nor mainly a data-based problem ... nor do I think it's a problem that can effectively be solved via a "great coding + lots of data" mentality. I think AGI is a deep conceptual problem that has more to do with understanding cognition than with churning out great code and effectively utilizing masses of data. Of course, lots of great software engineering will be required to create an AGI (and we're happy to have a few super-engineers within Novamente LLC, for example), and lots of data too (e.g. in the Novamente case we plan to start our systems out with perceptual and social data from virtual worlds like Second Life; and then later on feed them knowledge from Wikipedia and other textual sources). But if the focus of an "AGI" team is on coding and data, rather than on grokking the essence of cognition, AGI is not going to be the result.

So, IMO, for Google to create an AGI would require them not only to bypass the relative AGI skepticism represented by the Peter Norvig story above -- but also to operate an AGI project based on a significantly different culture than the one that has worked for Google so far, in their development of (in some cases, really outstandingly useful) narrow-AI applications.

All in all my impression after getting to know Google's in-house research program a little better, is about the same as it was beforehand. However, I did make an explicit effort to look for evidence disconfirming my prior hypotheses -- and I didn't really find any. If anyone has evidence that the impressions I've given here are mistaken, I'd certainly be happy to hear it.

OK, well, it's time to wind up this blog post and get back to my own effort to create AGI -- with far less money and computers than Google, but -- at least -- a focus on (and, I believe, a clear understanding of) the essence of the problem....

Sure, it would be nice to have the resources of a Google or M$ or IBM backing up Novamente! But, the thing is, you don't become a big company like those by focusing on grokking the essence of cognition -- you become a big company like those by focusing on practical stuff that makes money quickly, like code and data and user interfaces ... and if AI plays a role in this, it's problem-specific narrow-AI, such as Google has done so well with.

As Larry Page recognizes, AGI will certainly have massive business value, due to its incredible potential for delivering useful services to people in a huge number of contexts. But the culture and mentality needed to create AGI seems to be different from the one needed to rapidly create a large and massively profitable company. My prediction is that if Google ever does get an AGI, they will buy it rather than build it.

Friday, May 25, 2007

Pure Silliness


Ode to the Perplexingness of the Multiverse


A clever chap, just twenty-nine
Found out how to go backwards in time
He went forty years back
Killed his mom with a whack
Then said "How can it be that still I'm?"

On the Dangers of Incautious Research and Development

A scientist, slightly insane
Created a robotic brain
But the brain, on completion
Favored assimilation
His final words: "Damn, what a pain!"

A couple clever followups to the above poem were posted by others on the Singularity email list...

On the Dangers of Emulating Biological Drives in Artificial Intelligences
(by Moshe Looks)

A scientist once shook his head
and exclaimed "My career is now dead;
for although my AI
has an IQ that's high
it insists it exists to be bred!"

By Derek Zahn:

The Provably Friendly AI
Was such a considerate guy!
Upon introspection
And careful reflection,
It shut itself off with a sigh.

And, less interestingly...

On the Benefits of Clarity in Verbal Presentation

There was a prize pig from Penn Station
Who refused to eschew obfuscation
The swine with whom he traveled
Were bedazed by his babble
So they baconed him, out of frustration

Sunday, May 20, 2007

Flogging Poor Searle Again

Someone emailed me recently about Searle's Chinese Room argument,

http://en.wikipedia.org/wiki/Chinese_room

a workhorse theme in the philosophy of AI that normally bores me to tears.

But though the Chinese room bores me, part of my reply to the guy's question wound up interesting me slightly so I thought I'd repeat it here.

I won't recapitulate the Chinese room argument here; if you don't know it please follow the above link to Wikipedia.

The issue I'll raise here ties in with the question of whether recent theoretical developments regarding "AI with massive amounts of processing power" have any relevance to pragmatic AI.

As an example of this sort of theoretical research, check out:

http://www.hutter1.net/

which describes, among other things, an AI system called AIXI that uses infinite computational resources and achieves a level of intelligence greater than or equal to that of any other possible AI system. There are also approximations to AIXI, such as AIXItl, that use merely insane (rather than infinite) amounts of computational resources.

My feeling is that one should think about, not just

Intelligence = complexity of goals that a system can achieve

but also

Efficient intelligence = Sum over goals a system can achieve of: (complexity of the goal)/(amount of space and time resources required to achieve the goal)

According to these definitions, AIXI has zero efficient intelligence, and AIXItl has extremely low efficient intelligence. The challenge of AI in the real world is in achieving efficient intelligence not just raw intelligence.
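
Written out a bit more explicitly (this is just one way to formalize the informal definition above, and nothing here hangs on exactly how the space and time costs are combined):

EffInt(S) = \sum_{g \in \mathrm{Goals}(S)} \frac{C(g)}{R(S, g)}

where C(g) is the complexity of goal g, and R(S, g) is the amount of space and time resources S needs in order to achieve g. Since R(S, g) is infinite for AIXI on every goal, every term in the sum is zero; for AIXItl the terms are merely astronomically small.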

Also, according to these definitions, the Bekenstein bound places a limit on the maximal efficient intelligence of any system in the physical universe.

Now, back to the Chinese room (hmm, writing this blog post is making me hungry ... after I'm done typing it I'm going to head out for some Kung Pao chicken!!)....

A key point is: The scenario Searle describes is likely not physically possible, due to the unrealistically large size of the rulebook.

And even if Searle's scenario somehow comes out physically plausible (e.g. maybe Bekenstein is wrong due to currently unknown physics), it certainly involves systems totally unlike any that we have ever encountered. Our terms like "intelligence" and "understanding" and "mind" were not created for dealing with massive-computational-resources systems of this nature.

The structures that we associate with intelligence (will, focused awareness, etc.) in a human context, all come out of the need to do intelligent processing within modest space and time requirements.

So when someone says they feel like the {Searle+rulebook} system isn't really understanding Chinese, what they really mean (I argue) is: It isn't understanding Chinese according to the methods we are used to, which are methods adapted to deal with modest space and time resources.

This ties in with the relationship between intensity-of-consciousness and degree-of-intelligence.

(Note that I write about intensity of consciousness rather than presence of consciousness. I tend toward panpsychism but I do accept that "while all animals are conscious, some animals are more conscious than others" (to pervert Orwell). I have elaborated on this perspective considerably in my 2006 book The Hidden Pattern.)

In real life, these seem often to be tied together, because the cognitive structures that correlate with intensity of consciousness are useful ones for achieving intelligent behaviors.

However, Searle's scenario is pathological in the sense that it posits a system with a high degree of intelligence associated with a functionality (understanding Chinese) that is NOT associated with any intensity-of-consciousness.

But I suggest that this pathology is due to the unrealistically large amount of computing resources that the rulebook requires.

I.e., it is finitude of resources that causes intelligence and intensity-of-consciousness to be correlated. The fact that this correlation breaks down in a pathological, physically-impossible case requiring absurdly large resources doesn't mean too much...

What it means is that "understanding", as we understand it, has to do with structures and dynamics of mind that arise due to having to manifest efficient intelligence, not just intelligence.

That is really the moral of the Chinese room.

Tuesday, May 15, 2007

Technological versus Subjective Acceleration

This post is motivated by an ongoing argument with Phil Goetz, a local friend who believes that all this talk about "accelerating change" and approaching the Singularity is bullshit -- in part because he doesn't see things advancing all that amazingly exponentially rapidly around him.

There is plenty of room for debate about the statistics of accelerating change: clearly some things are advancing way faster than others. Computer chips and brain scanners are advancing more rapidly than forks or refrigerators. In this regard, I think, the key question is whether Singularity-enabling technologies are advancing exponentially (and I think enough of them are to make a critical difference). But that's not the point I want to get at here.

The point I want to make here is: I think it is important to distinguish technological acceleration from subjective acceleration.

This breaks down into a couple sub-points.

First: Already by this point in history, I suggest, advancement in technology has far outpaced the ability of the human brain to figure out new ways to make meaningful use of that technology.

Second: The human brain and body themselves pose limitations regarding how thoroughly we can make use of new technologies, in terms of transforming our subjective experience.

Because of these two points, a very high rate of technological acceleration may not lead to a comparably high rate of subjective acceleration. Which is, I think, the situation we are seeing at present.

Regarding the first point: Note that long ago in history, when new technology was created, it lasted quite a while before being obsoleted, so that each new technology was exploited pretty damn thoroughly before its successor came along.

These days, though, we've just BARELY begun figuring out how to creatively exploit X, when something way better than X comes along.

The example of music may serve to illustrate both of these points.

The invention of the electronic synthesizer/sampler keyboard was a hell of a breakthrough. However, the music we humans actually make has not changed nearly as much as the underlying technology has. By and large we use all this advanced technology to make stuff that sounds harmonically, rhythmically and melodically not that profoundly different from pre-synthesizer music. Certainly, the degree of musical change has not kept up with the degree of technological change: Madonna is not as different from James Brown as a synthesizer keyboard is from an electric guitar.

Why is that?

Well, humans take a while to adapt. People are still learning how to make optimal use of synthesizer/sampling keyboards for making interesting music ... but while people are still relatively early on that learning curve, technology has advanced yet further and computer music software gives us amazing new possibilities ... that we've barely begun to exploit...

Furthermore, our musical tastes are limited by our physiology. I could make fabulously complex music using a sequencer, with 1000's of intersecting melody lines carefully calculated, but no human would be able to understand it (I tried ;-). Maybe superhuman minds will be able to use modern music tech to create music far subtler and more interesting than any human music, for their own consumption.

And, even when acoustic and cognitive physiology isn't relevant, the rate of growth and change in a person's music appreciation is limited by their personality psychology.

To take another example, let's look at bioinformatics. No doubt that technology for measuring biological systems has advanced exponentially. As has technology for analyzing biological data using AI (my part of that story).

But, AI-based methods are very slow to pervade the biology community due to cultural and educational issues ... most biologists can barely deal with stats, let alone AI tech....

And, the most advanced measurement machinery is often not used in the most interesting possible ways. For instance, microarray devices allow biologists to take a whole-genome approach to studying biological systems, but, most biologists use them in a very limited manner, guided by an "archaic" single-gene-focused mentality. So much of the power of the technology is wasted. This situation is improving -- but it's improving at a slower pace than the technology itself.

Human adoption of the affordances of technology has become the main bottleneck, not the technology itself.

So there is a dislocation between the rate of technological acceleration and the rate of subjective acceleration. Both are fast but the former is faster.

Regarding word processing and Internet technology: our capability to record and disseminate knowledge has increased TREMENDOUSLY ... and, our capability to create knowledge worth recording and disseminating has increased a lot too, but not as much...

I think this will continue to be the case until the legacy human cognitive architecture itself is replaced with something cleverer such as an AI or a neuromodified human brain.

At that point, we'll have more flexible and adaptive minds, making better use of all the technologies we've invented plus the new ones they will invent, and embarking on a greater, deeper and richer variety of subjective experiences as well.

Viva la Singularity!

Thursday, February 01, 2007

Colors: A Recurring Dream

I took a couple hours and wrote down a recurring dream I've had for years, which is a sort of metaphor for transhumanism and the quest to create AGI...

http://goertzel.org/Colors.pdf

Friday, December 08, 2006

Polya's Inner Neanderthal

I remember reading, years ago, the excellent book "The Psychology of Invention in the Mathematical Field" by the mathematician Jacques Hadamard...

He surveyed a bunch of mathematicians, intending to find out how mathematicians think internally. Many mathematicians thought visually, it was found; some thought in terms of sounds, some purely abstractly.

But, George Polya was the only mathematician surveyed who claimed to think internally in terms of grunts and groans like "aaah", "urrghhh" , "hmtphghhghggg"....

At the time I read this, I thought it was very odd.

However, now I have just read Mithen's book ("The Singing Neanderthals", discussed in another, recent blog of mine) claiming that the language of Neanderthals and early Cro-magnons was like that: no words, just lengthy, semi-musical grunts and groans with varying intonation patterns....

So maybe Polya was just old-fashioned.... ;-)

Anyone else out there think in terms of grunts and groans and so forth? If so please contact me....

Wednesday, December 06, 2006

Updating Kazantzakis

I saw this quote in a friend's email signature...

"What a strange machine man is! You fill him with bread, wine, fish, and radishes, and out comes sighs, laughter, and dreams."
-- Nikos Kazantzakis (1885-1957), Greek novelist.


To which my immediate mental response was:

OK, fine -- but it's what happens when you feed him with hallucinogenic mushrooms, amphetamines, ginger beer, Viagra and stir-fried snails that really fascinates me!!

Saturday, December 02, 2006

Zebulon's Favorite Place

My son Zebulon (age 13) recently had to write a brief essay for school on "My Favorite Place," as part of a national essay competition. His essay was chosen to represent his school in the county-wide competition. His theme was an amusing one which may resonate with many readers of this blog -- the third paragraph particularly reminds me of myself on some of my more productive days! (But Zeb is not working on AI, he's working on animations, see zebradillo.com)





My Favorite Place
Zebulon Goertzel



I work my way past all the furniture in my cramped room and sit down at my chair. I see a computer, a laptop. On its screen are pixels. Tiny, stabbing rays of color that drill into my eyes and let me enjoy my computer to no end despite its hideous flaws. The monitor is marked and scarred due to various past and unknown misuses. The dull keyboard is with its regular layout, usable but without an S key. I look at the front disk drive and recall being told not to remove it.

Beside my laptop is my tablet. In its middle-left side is the pen, a gnawed-on, well-used device that is often lost and found in my pocket. The tablet cover is not without scratches, some deep, some light. Each scratch is from a scribble or drawing or line somebody drew. A bright wire links my tablet to the sloppy tangle of wires, connectors and cables which is usually behind my laptop.

My computer’s fan consistently buzz-whirs with high pitch. I am hypnotized as I slowly lean forward, as I grip my tablet pen with sore, almost numb fingers, as I click and click and click. My back is hunched and my neck is out. I work. My eyes ache, but I hardly notice. My stomach is empty, but I try to ignore it. I decide to be done. I get up, stretch, and go to care for myself. My favorite place is my computer, or my desk, because there are no limits to what a computer can do, and my computer fascinates me to no end.

The Cognitive Significance of Radiohead (aka, The Historical and Possibly Current Significance in the Human Mind of Patterns of Tonal Variation)

In one of those pleasant synchronicities, a couple days ago PJ Manney started a conversation with me about music and the scientific mind, at the same time as I received in the mail a book I had ordered a couple weeks ago, "The Singing Neanderthals," about the cognitive origins of music.

So, here I'll start with some personal notes and musings in the musicaloidal direction, and finally wander around to tying them in with cognitive theory...

I had told PJ I was a spare-time semi-amateur musician (improvising and composing on the electronic keyboard -- yeah, one of these days I'll put some recordings online; I keep meaning to but other priorities intervene) and she was curious about whether this had had any effect on my AI and other scientific work.

I mentioned to her that I often remember how Nietzsche considered his music improvisation necessary to his work as a philosopher. He kept promising himself to stop spending so much time on it, and once said something like "From now on, I will pursue music only insofar as it is domestically necessary to me as a philosopher."

This is a sentiment I have expressed to myself many times (my music keyboard being a tempting 10 feet away from my work desk...). Like Nietzsche, I have found a certain degree of musicological obsession "domestically necessary" to myself as a creative thinker.... The reasons for this are interesting to explore, although one can't draw definite conclusions based on available evidence....

When I get "stuck" thinking about something really hard, I often improvise on the piano. That way one of two things happens: either

1) my mind "loosens up" and I solve the problem

or

2) I fail to solve the problem, but then instead of being frustrated about it, I abandon the attempt for a while and enjoy myself playing music ;-)

Improvising allows one's music to follow one's patterns of thought, so the music one plays can sorta reflect the structure of the intellectual problem one is struggling with....

I drew on my experiences composing/improvising music when theorizing about creativity and its role in intelligence, and cooking up the aspects of the Novamente AGI design that pertain to flexible creativity....

As well as composing and improvising, I also listen to music a lot -- basically every kind of music except pop-crap and country -- most prototypically, various species of rock while in the car, and instrumental jazz/jazz-fusion when at home working ... [I like music with lyrics, but I can't listen to it while working, it's too distracting... brings me back too much to the **human** world, away from the world of data structures and algorithms and numbers!! ... the nice thing with instrumental music is how it captures abstract patterns of flow and change and interaction, so that even if the composer was thinking about his girlfriend's titties when he wrote the song, the abstract structures (including abstract **emotional** structures) in the music may feel (and genuinely **be**) applicable to something in the abstract theory of cognition ;-) ] ... but more important than that is the almost continual unconsciously-improvised "soundtrack" inside my head. It's as though I'm thinking to music about 40% of the time, but the music is generated by my brain as some kind of interpretation of the thoughts going on.... But yet when I try to take this internal music and turn it into **real music** at the keyboard, the translation process is of course difficult, and I find that much of the internal music must exist in some kind of "abstract sound space" and could never be fully realized by any actual sounds.... (These perverted human brains we are stuck with!!!)

Now, on to Mithen's book "The Singing Neanderthals," which makes a fascinating argument for the centrality of music in the evolution of human cognition.... (His book "The Prehistory of Mind" is really good as well, and probably more of an important work overall, though not as pertinent to this discussion...)

In brief he understands music as an instantiation and complexification of an archaic system of communication that was based (not on words but) on patterns of vocal tonal variation.

(This is not hard to hear in Radiohead, but in Bach it's a bit more sublimated ;=)

This ties in with the hypothesis of Sue Savage-Rumbaugh (who works with the genius bonobo Kanzi) that language likely emerged originally from protolanguages composed of **systems of tonal variation**.

Linguist Alison Wray has made related hypotheses: that protolanguage utterances were holistic, and got partitioned into words only later on. What Savage-Rumbaugh adds is that before protolanguage was partitioned into words, it was probably possessed of a deep, complex semantics of tonal variation. She argues this is why we don't recognize most of the existing language of animals: it's not discrete-word language but continuous-tonal-variation language.

(Funny that both these famous theorists of language-as-tonal-variation are women! I have sometimes been frustrated by my mom or wife judging my statements not by their contents but by the "tone" of delivery ;-)

This suggests that a nonhuman AI without a very humanlike body is never going to experience language anywhere near the same way as a human. Even written language is full of games of implied tonal variation-pattern; and in linguistics terms, this is probably key to how we select among the many possible parses of a complex sentence.

[Side note to computational linguists and pragmatic AI people: I agree the parse selection problem can potentially be solved via statistics, like Dekang Lin does in MiniPar; or via pure semantic understanding, as we do when reading Kant in translation, or anything else highly intellectual and non-tonal in nature.... But it is interesting to note that humans probably solve parse selection in significant part thru tonal pattern recognition....]

Regarding AI and language acquisition, this line of thinking is just a further justification of taking a somewhat nonhumanlike approach to protolanguage learning; as if this sort of theory is right, the humanlike approach is currently waaay inaccessible to AI's, even ones embodied in real or simulated robots... It will be quite a while until robot bodies support deep cognitive/emotional/social experience of tonal variation patterns in the manner that we humans are capable of.... The approach to early language learning I propose for Novamente is a subtle combination of humanlike and nonhumanlike aspects.

More speculatively, there may be a cognitive flow-through from "tonal pattern recognition" to the way we partition up the overall stream of perceived/enacted data into events -- the latter is a hard cognitive/perceptual problem, which is guided by language, and may also on a lower level be guided by subtle tonal/musical communicative/introspective intuitions. (Again, from an AI perspective, this is justification in favor of a nonhumanlike route ... one of the subtler aspects of high-level AI design, I have found, is knowing how to combine human-neurocognition inspiration with computer-science inspiration... but that is a topic for another blog post some other day...)

I am also reminded of the phenomenon of the mantra -- which is a pattern of tonal variation that is found to have some particular psychospiritual effect on humans. I have never liked mantras much personally, being more driven to the spare purity of Zen meditation (in those rare moments these days when emptying the intellectual/emotional mind and seeking altered states of purer awareness seems the thing to do...); but in the context of these other ideas on music, tones and psychology, I can see that if we have built-in brain-wiring for responding to tonal variation patterns, mantras may lock into that wiring in an interesting way.

I won't try to describe for you the surreal flourish of brass-instrument sounds that I hear in my mind at this moment -- a celebratory "harmony of dissonance" tune/anti-tune apropos of the completion of this blog post, and the resumption of the software-code-debugging I was involved with before I decided to distract myself briefly via blogging...

Friday, November 10, 2006

Virtual Brilliance, Virtual Idiocy

Last night, at the offices of the Electric Sheep Company (a company devoted to creating "virtual real estate" in multi-participant online simulation worlds such as Second Life), I saw Sibley Verbeck give a lovely presentation on the state of the art in these proto-Metaverse technologies.

These days, more than 10K people are online in Second Life at any given moment, it seems. A million subscribers, half of them active. People are talking about the potential for using Second Life for business presentations, as a kind of super-pumped-up 3D avatar-infused WebEx. And of course the possibility for other cool apps not yet dreamed of.

Stirring stuff ... definitely, technology worth paying attention to.

And yet, Sibley's excellent presentation left me wondering the following: Do we really want to perpetuate all the most stupid and irritating features of human society in the metaverse ... such as obsession with fashion and hairstyles!!??

"Virtual MTV Laguna Beach", a non-Second-Life project that Electric Sheep Factory did, is technically impressive yet morally and aesthetically YUCK, from a Ben Goertzel perspective. Virtual So-Cal high school as a post-Singularity metaverse is a kind of transhumanist nightmare.

I remain unclear regarding whether there will really be any **interesting** "killer apps" for metaverse technology (and I don't find gaming or online dating all that interesting ;) before really powerful multisensory VR interfaces come about.

And even then, simulating humanity in virtuo fascinates me far less than going beyond the human body and its restrictions altogether.

But, I do note that we are currently using a 3D sim world to teach our Novamente baby AI system. Once it becomes smarter, perhaps we will release our AI in Second Life and let it learn from the humans there ... about important stuff like how to wear its hair right (grin!)

And I must admit to being excited about the potential of this sort of tech for scientific visualization. Flying your avatar through the folds of a virtual human brain, or a virtual cell full of virtual DNA, would be mighty educational. Not **fundamental** in the sense of strong AI or molecular assemblers or fully immersive VR, but a lot niftier than Virtual Laguna Beach....

-- Ben

Thursday, November 02, 2006

Music as a Force of Nature...

This is just a quick follow-up to the prior post on "Being a Force of Nature" ...

Thinking over the issues I wrote about in that post, I was reminded of a failed attempt I made many years ago to construct a more robust kind of music theory than the ones that currently exist....

(Ray Jackendoff's generative-grammar-based theory of music is a nice attempt in a similar direction to what I was trying to do, but ultimately I think he failed also....)

Existing music theory seems not to address the most important and interesting questions about music: Which melodies and rhythms are the most evocative to humans, in which ways, and why?

To put it crudely, we know how to distinguish (with fairly high accuracy) a horrible melody from an OK-or-better melody based on automated means. And we know how to distinguish (with fairly high accuracy) what sorts of emotions an OK-or-better melody is reasonably likely to evoke, by automated means.

But, we have NO handle whatsoever, scientifically or analytically, on what distinguishes a GREAT melody (or rhythm, though I've thought most about melodies) from a mediocre one.

I spent a fair bit of time looking for patterns of this nature, mostly eyeballing various representations of melodies but also using some automated software scripts. No luck ... and I long ago got too busy to keep thinking about the issue....

What was wrong with this pursuit was, roughly speaking, the same thing that's wrong with thinking about human minds as individual, separate, non-social/cultural entities....

A musical melody is a sequence of notes arranged in time, sure ... but basically it's better thought of as a kind of SOFTWARE PROGRAM intended to be executed within the human acoustic/cognitive/emotional brain.

So, analyzing melodies in terms of their note-sequences and time-delays is sort of like analyzing complex software programs in terms of their patterns of bits. (No, it's not an exact analogy by any means, but you may get the point.... The main weaknesses of the analogy are: notes and delays are higher-level than bits; and, musical melodies are control-sequences for a complex adaptive system, rather than a simpler, more deterministic system like a von Neumann computer.)

In principle one could find note/delay-level patterns to explain what distinguishes good from great music, but one would need a HUGE corpus of examples, and then the patterns would seem verrrry complex and tricky on that level.

A correct, useful music theory would need to combine the language of notes and delays and such with the language of emotional and cognitive responses. The kind of question involved is: in a given emotional/cognitive context, which specific note/delay patterns/combinations provide which kinds of shifts to the emotional/cognitive context.

However, we currently lack a good language for describing emotional/cognitive contexts.... Which makes the development of this kind of music theory pretty difficult.

So in what sense is music a force of nature? A piece of music comes out of the cultural/psychological/emotional transpersonal matrix, and has meaning and pattern mainly in combination with this matrix, as a sequence of control instructions for the human brains that form components of this matrix...

(I am reminded of Philip K. Dick's novel VALIS, in which a composer creates music that is specifically designed to act on human brains in a certain way, designed to bring them to certain spiritual realizations. Before ever reading Dick, in my late teens, I had a fantasy of composing a musical melody that was so wonderfully recursively revelatory -- in some kind of Escher-meets-Jimi-Hendrix-and-Bach sort of way -- that it would wake up the listener's mind to understand the true nature of the universe. Alas, I've been fiddling at the piano keyboard for years, and haven't come up with it yet....)

Anyway, this is far from the most important thing I could be thinking about! Compared to artificial general intelligence, music is not so deep and fascinating ... ultimately it's mostly a way of fiddling with the particularities of our human mental system, which is not so gripping as the possibility of going beyond these particularities in the right sort of way....

But yet, in spite of its relative cosmic unimportance, I can't really stay away from music for too long! The KORG keyboard sitting behind me tempts ... and many of my best ideas have come to me in the absence/presence that fills my mind while I'm improvising in those quasi-Middle-Eastern scales that I find so seductive (and my daughter, Scheherazade, says she's so sick of hearing, in spite of her Middle-Eastern name ;-)

OK... back to work! ...

Tuesday, October 31, 2006

On Being a Force of Nature...

Reading the book

Presence: An Exploration of Profound Change in People, Organizations, and Society
by Peter M. Senge, C. Otto Scharmer, Joseph Jaworski, and Betty Sue Flowers

led me inevitably to thoughts about the useful (but sometimes counterproductive) illusions of self and free will.

The authors argue that one path to achieving great things and great happiness is to let go of the illusion of autonomy and individual will, and in the words of George Bernard Shaw "be a force of nature," allowing oneself to serve as a tool of the universe, of larger forces that exist all around and within oneself, and ultimately are a critical part of one's own self-definition (whether one always realizes this or not).

The Shaw quote says:

"
This is the true joy in life, the being used for a purpose you consider a mighty one, the being a force of nature, rather than a feverish, selfish clod of ailments and grievances complaining that the world will not devote itself to making you happy.
"

A related quote from Martin Buber says of the "truly free" man, that he:

"
... intervenes no more, but at the same time, he does not let things merely happen. He listens to what is emerging from himself, to the course of being in the world; not in order to be supported by it, but in order to bring it to reality as it desires.
"

There is an interesting dilemma at the heart of this kind of wisdom, which is what I want to write about today.

A part of me rebels strongly against all this rhetoric about avoiding individual will and being a force of nature. After all, nature sucks in many ways -- nature "wants" me and my wife and kids and all the rest of you humans to die. What the natural and cultural world around me desires is in large measure repellent to me. I don't want to "get a haircut and get a real job" just because that's what the near-consensus of the world around me is ... and nor do I want to submit to death and disease. Nor do I want to listen to everything that nature has put inside me: anger, irrationality and the whole lot of it.... Nature has given me some great gifts and some nasty stuff as well.

Many of the things that are important to me are -- at least at first glance -- all about me exercising my individual will against what nature and society want me to do. Working to end the plague of involuntary death. Working to create superhuman minds. Composing music in scales few enjoy listening to; writing stories with narrative structures so peculiar only the really open-minded can appreciate them. Not devoting my life entirely or even primarily to the pursuits of money, TV-viewing, and propagating my genome.

On the other hand, it's worth reflecting on the extent to which the isolation and independence of the individual self is an illusion. We humans are not nearly so independent as modern Western -- and especially American -- culture (explicitly and implicitly) tells us. In fact the whole notion of a mind localized in a single body is not quite correct. As my dear friend Meg Heath incessantly points out, each human mind is an emergent system that involves an individual body, yes, but also a collection of tools beyond the body, and a collection of patterns of interaction and understanding within a whole bunch of minds. In practice, I am not just who I am inside my brain, I am also what I am inside the brains of those who habitually interact with me. I am not just what I do with my hands but also what I do with my computer. I wouldn't be me without my kids, nor without the corpus of mathematical and scientific knowledge and philosophical ideation that I have spent a large bulk of my life absorbing and contributing to.

So, bold and independent individual willfulness is, to an extent, an illusion. Even when we feel that we're acting independently, from the isolation of our own heart and mind, we are actually enacting distributed cultural and natural processes. A nice illustration of this is the frequency with which scientific discoveries -- even revolutionary ones -- are made simultaneously by multiple individuals. Charles Darwin and Alfred Russel Wallace were being willful, independent, deviant thinkers -- yet each of them was also serving as a nodal point for a constellation of forces existing outside himself ... a constellation of forces that was almost inevitably moving toward a certain conclusion, which had to be manifested through someone and happened to be manifested through those two men.

An analogy appears to exist with the representation of knowledge in the human brain. There is a peculiar harmony of localization and distribution in the way the brain represents knowledge. There are, in many cases, individual and highly localized brain regions corresponding to particular bits of knowledge. If you remove that little piece of the brain, the knowledge may go away (though in many but not all cases, it may later be regenerated somewhere else). But yet, that doesn't mean the knowledge is immanent only in that small region. Rather, when the knowledge is accessed or utilized or modified, a wide variety of brain regions may be activated. The localized region serves as a sort of "trigger" mechanism for unlocking a large-scale activation pattern across many parts of the brain. So, the knowledge is both localized and distributed: there are globally distributed patterns that are built so as to often be activated by specific local triggers.

We can look at humans as analogous to neurons, in the above picture. None of us contains that much in and of ourselves, but any one of us may be more or less critical in triggering large-scale activation patterns ... which in turn affect a variety of other individuals in a variety of ways....

So then, the trick in being a "force of nature" is to view yourself NOT as an individual entity with an individual mind contained in an individual body, making individual decisions ... but rather, as a potential trigger for global activity patterns; or, to put it slightly differently, as a node or nexus of a whole bunch of complex global activity patterns, with the capability to influence as well as be influenced.

When we act -- when we feel like "we" are "acting" -- it is just as fair to say that the larger (social, cultural, natural, etc.) matrix of patterns that defines us is acting thru the medium of us.

I feel analytically that what I said in the previous paragraph is true... but what is interesting is how rarely I actually feel that way, in practice, in the course of going about my daily business. Even in cases where it is very obviously the truth -- such as my work on artificial general intelligence. Yes, I have willfully chosen to do this, instead of something else easier or more profitable or more agreeable to others. On the other hand, clearly I am just serving as the tool of a larger constellation of forces -- the movement of science and technology toward AGI has been going on a long time, which is why I have at my disposal the tools to work on AGI; and a general cultural/scientific trend toward legitimization of AGI is beginning, which is why I have been able to recruit others to work on AGI with me, which has been an important ingredient for maintaining my own passion for AGI at such a high level.

How different would it be, I wonder, if in my individual daily (hourly, minutely, secondly) psychology, I much more frequently viewed myself as a node and a trigger rather than an individual. A highly specialized and directed node and trigger, of course -- not one that averages the inputs around me, but one that is highly selective and responds in a very particular way intended to cause particular classes of effects which (among other things) will come back and affect me in specific ways.

In short: Letting go of the illusion of individuality, while retaining the delights of nonconformity.

Easy enough to say and think about; and rather tricky to put into practice on a real-time basis.

Cultures seem to push you either to over-individualism or over-conformity, and finding the middle path as usual is difficult -- and as often, is not really a middle path, in the end, but some sort of "dialectical synthesis" leading beyond the opposition altogether and into a different way of being and becoming....

Sunday, September 10, 2006

Friendliness vs. Compassion, revisited (plus a bunch of babbling about what I've been up to this year)

Wow, it's been a long time since I've blogged on here -- apparently I haven't been in a bloggy mood.

It's been a busy year ... I've sent my oldest son Zarathustra off to college at age 16 (to Simon's Rock College, www.simons-rock.edu, the same place I, my sister and my ex-wife went way back in the day), which is a very odd feeling ... I finished a pretty decent draft of a novel, Echoes of the Great Farewell, which is a completely lunatic prose-poetic novel-thing told from the stream-of-consciousness point of view of a madman who believes that hallucinogenic mushrooms have told him how to create a superhuman AI (perhaps I'll actually try to get this one published, though it's not a terribly publisher-friendly beast) ... I came up with a substantial simplification of the Novamente AI design, which I'm pretty happy with due to its deep foundations in systems philosophy ... worked with my Novamente colleagues to take a few more incremental steps toward implementation of the Novamente AGI design (especial progress in the area of probabilistic reasoning, thanks to the excellent efforts of Ari Heljakka) ... did some really nice data mining work in the context of some commercial projects ... made some freaky instrumental music recordings that my wife at least enjoyed ... hiked the Na Pali Trail on Kauai and a whole bunch of trails near the Matterhorn in the Alps with my mountain-maniacal young wife Izabela ... co-organized a conference (the AGIRI workshop) ... published a philosophy book, The Hidden Pattern, which tied together a whole bunch of recent essays into a pretty coherent statement of the "world as pattern" perspective that has motivated much of my thinking ... developed a new approach to AGI developmental psychology (together with Stephan Vladimir Bugaj) ... starred in a few animations created by my son Zebulon (zebradillo.com), including one about rogue AI and another in which I mercilessly murder a lot of dogs ... helped discover what seems to be the first plausible genetic underpinnings for Chronic Fatigue Syndrome (together with colleagues at the CDC and Biomind LLC) ... and jeez, well this list is dragging on, but it's really not the half of it...

A pretty full year -- fun to live; too much going on to permit much blogging ... but frustrating in the big picture, given that it's been yet another year in which only modest incremental progress has been made toward my most important goal of creating AGI. My understanding of AGI and the universe has increased significantly this year so far, which is important. And the Novamente codebase has advanced too. Again, though, balancing the goal of achieving AGI with the goal of earning cash to support a family (send the kids to college, pay the alimony (which runs out in another 9 months -- yay!!), etc.) proves a tough nut to crack, and is just a dilemma I keep living with, without solving it satisfactorily so far.... I'll be spending much of the next 6 weeks trying to solve it again, by doing a bunch of meetings and social-networking events partially aimed at eventually putting me in touch with investors or other partners who may be interested in funding my AGI work more fully than is currently the case. (Don't get me wrong, we are moving toward AGI in the Novamente project right now, but we could be moving 10 times faster with some fairly modest investment ... the small amount of investment we've gotten so far, combined with the small surplus value my colleagues and I have managed to extract from our commercial narrow-AI contracts, is far from enough to move us along at maximum rate.)

BUT ANYWAY ... all this was not the point of this blog entry. Actually, the point was to give a link to an essay I wrote on a train from Genova to Zermatt, following a very interesting chat with Shane Legg and Izabela. Shane wrote a blog entry after our conversation, which can be found by going to his site

http://www.vetta.org/

and searching for the entry titled "Friendly AI is Bunk." I wrote an essay with a similar theme but a slightly different set of arguments. It is found at

http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf

The essay is informal in the sense of a blog entry, but is too long to be a blog entry. My argument is a bit more positive than Shane's in that, although I agree with him that guaranteeing "AI Friendliness" in a Yudkowskian sense is very unlikely, I think there may be more general and abstract properties ("compassion" (properly defined, and I'm not sure how), anyone?) that can be more successfully built into a self-modifying AI.... (Shane by the way is a deep AI thinker who is now a PhD student working with Marcus Hutter on the theory of infinitely powerful AI's, and who prior to that did a bunch of things including working with me on the Webmind AI system in the late 1990's, and working with Peter Voss on the A2I2 AGI architecture.)
While you're paying attention, you may be interested in another idea I've been working on lately, which is a variant of the Lojban language (tentatively called Lojban++) that I think may be very useful for communication between humans and early-stage AGI's. If you're curious you can read about it at

http://www.goertzel.org/papers/lojbanplusplus.pdf

With a view toward making Lojban++ into something really usable, I've been spending a bit of time studying Lojban lately, which is a slow but fascinating and rewarding process that I encourage others to undertake as well (see www.lojban.org).

Well, OK ... that's enough for now ... time for bed. (I often like late-night as a time for work due to the quiet and lack of interruptions; but tonight my daughter is having a friend sleep over and they're having an extremely raucous post-midnight mop-the-dirty-kitchen-floor/mock-ice-skating party which is more conducive to blogging than serious work ;-). I hope to blog a bit more often in the next months; for whatever obscure human-psychology reason it seems to gratify some aspects of my psyche. Hopefully the rest of 2006 will be just as fun and diverse as the part so far -- and even more productive for Novamente AGI...

Wednesday, January 25, 2006

Inconsistentism

This blog entry arises from an email I sent to the SL4 email list, in response to a suggestion by Marc Geddes that perhaps the universe can best be considered as a logically inconsistent formal system.

I find that Marc's suggestion ties in interestingly with a prior subject I've dealt with in this blog: Subjective Reality.

I think it is probably not the best approach to think about the universe as a formal system. I find it more useful to consider formal systems as approximate and partial models of the universe.

So, in my view, the universe is neither consistent nor inconsistent, any more than a brick is either consistent or inconsistent. There may be mutually consistent or mutually inconsistent models of the universe, or of a brick.

The question Marc has raised, in this perspective, is whether the "best" (in some useful sense) way of understanding the universe involves constructing multiple mutually logically inconsistent models of the universe.

An alternative philosophical perspective is that, though the universe is not in itself a formal system, the "best" way of understanding it involves constructing more and more comprehensive and sophisticated consistent formal systems, each one capturing more aspects of the universe than the previous. This is fairly close to being a rephrasing of Charles S. Peirce's philosophy of science.

It seems nice to refer to these two perspectives as Inconsistentist versus Consistentist views of the universe. (Being clear, however, that the inconsistency and consistency refer to models of the universe rather than to the universe itself.)

Potentially the Inconsistentist perspective ties in with a previous thread in this blog regarding the notion of Subjective Reality. It could be that, properly formalized, the two models

A) The universe is fundamentally subjective, and the apparently objective world is constructed out of a mind's experience

B) The universe is fundamentally objective and physical, and the apparently subjective world is constructed out of physical structures and dynamics

could be viewed as two

  • individually logically consistent
  • mutually logically inconsistent
  • separately useful
models of the universe. If so, this would be a concrete argument in favor of the Inconsistentist philosophical perspective.

Inconsistentism also seems to tie in with G. Spencer Brown's notion of modeling the universe using "imaginary logic", in which contradiction is treated as an extra truth value similar in status to true and false. Francisco Varela and Louis Kauffman extended Brown's approach to include two different imaginary truth values I and J, basically corresponding to the series

I = True, False, True, False,...

J = False, True, False, True,...

which are two "solutions" to the paradox

X = Not(X)

obtained by introducing the notion of time and rewriting the paradox as

X[t+1] = Not (X[t])
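
Just to make the oscillating "solutions" concrete, here is a trivial sketch (my own toy illustration, not Brown's, Varela's or Kauffman's actual formalism):

def unroll(x0, steps=6):
    # Iterate X[t+1] = not X[t], starting from x0.
    seq = [x0]
    for _ in range(steps - 1):
        seq.append(not seq[-1])
    return seq

print(unroll(True))   # I = [True, False, True, False, True, False]
print(unroll(False))  # J = [False, True, False, True, False, True]

The paradox has no fixed point over {True, False}, but once time is introduced it acquires these two periodic solutions, which are exactly the series I and J above.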

In Brownian philosophy, the universe may be viewed in two ways

  • timeless and inconsistent
  • time-ful and consistent
Tying this in with the subjective/objective distinction, we obtain the interesting idea that time emerges from the feedback between subjective and objective. That is, one may look at a paradox such as

creates(subjective reality, objective reality)
creates(objective reality, subjective reality)
creates(X,Y) --> ~ creates(Y,X)

and then a resolution such as

I = subjective, objective, subjective, objective,...
J = objective, subjective, objective, subjective,...

embodying the iteration

creates(subjective reality[t], objective reality[t+1])
creates(objective reality[t+1], subjective reality[t+2])


If this describes the universe then it would follow that the subjective/objective distinction only introduces contradiction if one ignores the existence of time.

Arguing in favor of this kind of iteration, however, is a very deep matter that I don't have time to undertake at the moment!

I have said above that it's better to think of formal systems as modeling the universe rather than as being the universe. On the other hand, taking the "patternist philosophy" I've proposed in my various cognitive science books, we may view the universe as a kind of formal system comprised of a set of propositions about patterns.

A formal system consists of a set of axioms.... OTOH, in my "pattern theory" a process F is a pattern in G if
  • F produces G
  • F is simpler than G
So I suppose you could interpret each evaluation "F is a pattern in G" as an axiom stating "F produces G and F is simpler than G".

In this sense, any set of patterns may be considered as a formal system.

I would argue that, for any consistent simplicity-evaluation-measure, the universal pattern set is a consistent formal system; but of course inconsistent simplicity-evaluation-measures will lead to inconsistent formal systems.
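
To make the above concrete, here is a toy sketch in which the simplicity-evaluation-measure is just raw description length (my own illustrative choice for this post; any consistent measure would serve the argument equally well):

def cost(x):
    # Crude stand-in simplicity measure: length of the representation.
    return len(x)

def run(program):
    # Tiny "interpreter": a program is a (token, repetitions) pair.
    token, n = program
    return token * n

def is_pattern(program, g):
    # F is a pattern in G iff F produces G and F is simpler than G.
    return run(program) == g and cost(repr(program)) < cost(g)

g = 'ab' * 1000
print(is_pattern(('ab', 1000), g))  # True: a short program producing a long string
print(is_pattern((g, 1), g))        # False: this "program" is no simpler than G

Each True evaluation here corresponds to one axiom of the pattern-set-as-formal-system described above.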

Whether it is useful to think about the whole universe as a formal system in this sense, I have no idea...

Thursday, December 08, 2005

A General Theory of the Development of Forms (wouldn't it be nice to have one?)

This blog entry briefly describes a long-term conceptual research project I have in mind, and have been thinking about for a while, which is to try to figure out some sort of "general theory of the development of forms/patterns in growing complex systems."

Since the Novamente AGI high-level design and the "patternist philosophy of mind" are basically completed and stable for a while (though I'm still engaged with writing them up), I need a new conceptual obsession to absorb the extremely-abstract-thinking portion of my brain... ;-)

Thinking about the development of forms, I have in mind three main specific areas:

  • developmental psychology (in humans and AI's)
  • epigenesis in biological systems
  • the growth of the early universe: the emergence of physical law from lawlessness, etc. (cf John Wheeler)

Each of these is a big area and I've decided to proceed through them in this order. Maybe I will never get to the physics part and will just try to abstract a general theory of development from the first two cases, we'll see.

I also have an intuition that it may be useful to use formal language theory of some sort as a conceptual tool for expressing developmental stages and patterns. Piaget tried to use abstract algebra in some of his writings, which was a nice idea, but didn't quite work. This ties in with Jerry Fodor's notion of a "language of thought", which I don't quite buy in all the senses he means it, but which may have some real meat to it. It may be that developing minds possess different, progressively richer languages of thought at different stages. I don't know if anyone has taken this approach in the developmental psych literature.

For instance, it's arguable that quantifier binding is only added to the human language of thought at Piaget's formal stage, and that recursion is only added to the human language of thought at Piaget's concrete operational stage (which comes along with phrase structure syntax as opposed to simpler proto-language). What I mean by "X is added to the human language of thought at stage S" is something like "X can be used with reasonable generality and fluidity at stage S" -- of course many particular instances of recursion are used before the pre-operational phase, and many particular instances of quantifier binding are used before the formal phase. But the full "syntax" of these operations is not mastered prior to the stages I mentioned, I suggest. (Note that I am using Piaget's stage-labels only for convenience, I don't intend to use them in my own theory of forms; if I take a stage-based approach at all then I will define my own stages.)

I note that formal language theory is something that spans different domain areas in the sense that

  • there's discussion of "language of thought" in a general sense
  • natural language acquisition is a key aspect of developmental psych
  • L-system theory shows that formal languages are useful for explaining and modeling plant growth
  • "Symbolic dynamics" uses formal language theory to study the dynamics of chaotic dynamical systems in any domain, see also Crutchfield and Young

So it seems to be a potentially appropriate formal tool for such a project.

I was discussing this with my friend Stephan Bugaj recently and he and I may write a book on this theme if we can pull our thinking together into a sufficiently organized form....

Friday, December 02, 2005

More Venting about Scientific Narrowmindedness and Superintelligent Guinea Pigs

I spent the day giving a talk about bioinformatics to some smart medical researchers and then meeting with them discussing their research and how advanced narrow-AI informatics tools could be applied to help out with it.

AAARRRGGHHH!!! Amazing how difficult it is to get even clever, motivated, knowledgeable biologists to understand math/CS methods. The techniques I presented to them (a bunch of Biomind stuff) would genuinely help with their research, and are already implemented in stable software -- there's nothing too fanciful here. But the "understanding" barrier is really hard to break through -- and I'm not that bad at explaining things; in fact I've often been told I'm really good at it....

We'll publish a bunch of bioinformatics papers during the next year and eventually, in a few more years, the techniques we're using (analyzing microarray and SNP and clinical data via learning ensembles of classification rules; then data mining these rule ensembles, and clustering genes together based on whether they tend to occur in the same high-accuracy classification rules, etc.) will become accepted by 1% or 5% of biomedical researchers, I suppose. And in 10 years probably it will all be considered commonplace: no one will imagine analyzing genetics data without using such techniques....
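
For the curious, here is a very stripped-down toy version of the co-occurring-rules idea (my own illustration cooked up for this blog post, nothing like the actual Biomind code, which works over thousands of genes and noisy labels):

import itertools
import random

random.seed(0)
n_genes, n_samples = 20, 60

# Toy "expression" data in which genes 3 and 7 jointly determine the class label.
data = [[random.random() for _ in range(n_genes)] for _ in range(n_samples)]
labels = [int(row[3] > 0.5 and row[7] > 0.5) for row in data]

def accuracy(g1, g2, t=0.5):
    # Accuracy of the conjunctive rule: class 1 iff gene g1 > t and gene g2 > t.
    correct = sum(int(row[g1] > t and row[g2] > t) == y
                  for row, y in zip(data, labels))
    return correct / n_samples

# Keep only the high-accuracy rules; genes that keep turning up together
# in such rules are candidates for acting in combination.
good_rules = [(g1, g2) for g1, g2 in itertools.combinations(range(n_genes), 2)
              if accuracy(g1, g2) > 0.9]
print(good_rules)  # expect the planted pair (3, 7), possibly plus a stray chance survivor

The real pipeline learns far richer rules and then clusters genes by co-membership in the accurate ones -- but even this toy version shows why the statistics, not the biology alone, carry the signal.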

Whether Biomind will manage to get rich during this process is a whole other story -- it's well-known that the innovative companies at the early stage of a revolution often lose out financially to companies that enter the game later once all the important ideas have already been developed. But finances aside, I'm confident that eventually, little by little, the approach I'm taking to genetic data analysis will pervade and transform the field, even if the effect is subtle and broad enough that I don't get that much credit for it....

And yet, though this Biomind stuff is complex enough to baffle most bioinformaticists and to be really tough to sell, it's REALLY REALLY SIMPLE compared to the Novamente AI design, which is one or two orders of magnitude subtler. I don't think I'm being egomaniacal when I say that no one else has really appreciated most of the subtlety in the Novamente design -- not even the other members of the Novamente team, many of whom have understood a lot. Which is verrrry different from the situation with Biomind: while the Biomind methods are too deep for most biologists, or most academic journal referees who review our papers, to understand, everyone on the Biomind team fully "gets" the algorithms and ideas.

Whether the subtlety of the Novamente design ever gets to be manifested in reality remains to be determined -- getting funding to pay a small team to build the Novamente system according to the design remains problematic, and I am open to the possibility that it will never happen, dooming me (as I've joked before) to a sort of Babbagedom. What little funding there is for AGI-ish research tends to go to folks who are better at marketing than I am, and who are willing to tell investors the story that there's some kind of simple path to AGI. Well, I don't think there is a simple path. There's at least one complex path (Novamente) and probably many other complex paths as well; and eventually someone will follow one of them if we don't annihilate ourselves first. AGI is very possible with 3-8 years effort by a small, dedicated, brilliant software team following a good design (like Novamente), but if the world can't even understand relatively simple stuff like Biomind, getting any understanding for something like Novamente is obviously going to continue to be a real uphill battle!

Relatedly, a couple weeks ago I had some long conversations with some potential investors in Novamente. But the investors ended up not making any serious investment offer -- for a variety of reasons, but I think one of them was that the Novamente design was too complex for them to easily grok. If I'd been able to offer them some easily comprehensible apparent path to AGI, I bet they would have invested. Just like it would be easier to sell Biomind to biologists if they could grok the algorithms as well as the Biomind technical team. Urrrghh!

Urrrgghhh!! urrrgghh!! ... Well, I'll keep pushing. There are plenty of investors out there. And the insights keep coming: interestingly, in the last few days a lot of beautiful parallels have emerged between some of our commercial narrow-AI work in computational linguistics and our more fundamental work in AGI (relating to making Novamente learn simple things in the AGI-SIM simulation world). It turns out that there are nice mathematical and conceptual parallels between algorithms for learning semantic rules from corpuses of texts, and the process of learning the functions of physical objects in the world. These parallels tell us a lot about how language learning works -- specifically, about how structures for manipulating language may emerge developmentally from structures for manipulating images of physical objects. This is exactly the sort of thing I want to be thinking about right now: now that the Novamente design is solid (though many details remain to be worked out, these are best worked out in the course of implementation and testing), I need to be thinking about "AGI developmental psychology," about how the learning process can be optimally tuned and tailored. But instead, to pay the bills and send the kids to college yadda yadda yadda, I'm trying to sell vastly simpler algorithms to biologists who don't want to understand why it's not clever to hunt for biomarkers for a complex disease by running an experiment with only 4 Cases and 4 Controls. (Answer: because complex diseases have biomarkers that are combinations of genes or mutations rather than individual genes/mutations, and to learn combinational rules distinguishing one category from another, a larger body of data is needed.)
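To put some made-up numbers behind that answer, here is a quick Python doodle (pure illustration, no real data): with only 4 cases and 4 controls, random two-gene rules over pure-noise "expression data" separate the two groups perfectly again and again, which is exactly why a study that small can't support combinational biomarker claims.

```python
import random

def count_lucky_rules(n_cases=4, n_controls=4, n_genes=1000, n_candidate_rules=10000):
    """How many random two-gene AND-rules 'perfectly' classify pure noise?"""
    labels = [1] * n_cases + [0] * n_controls
    data = [[random.random() for _ in range(n_genes)] for _ in labels]  # no signal at all
    lucky = 0
    for _ in range(n_candidate_rules):
        g1, g2 = random.sample(range(n_genes), 2)
        t1, t2 = random.random(), random.random()
        predictions = [int(row[g1] > t1 and row[g2] > t2) for row in data]
        if predictions == labels:   # a "perfect" combinational biomarker -- found in noise
            lucky += 1
    return lucky

print(count_lucky_rules())  # typically around a dozen, despite there being nothing to find
```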

Ooops! I've been blogging too long, I promised Scheherazade I would go play with her guinea pigs with her. Well, in a way the guinea pigs are a relief after dealing with humans all day ... at least I don't expect them to understand anything. Guinea pigs are really nice. Maybe a superintelligent guinea pig would be the ultimate Friendly AI. I can't remember ever seeing a guinea pig do anything mean, though occasionally they can be a bit fearful and defensive....

Tuesday, November 29, 2005

Post-Interesting

Hi all,

I have launched a second blog, which is called Post-Interesting

www.post-interesting.com

and I have invited a number of my friends to join me in posting to it (we'll see if any of them actually get around to it!).

The idea is that this current blog ("Multiverse According to Ben") will contain more personal-experience and personal-opinion type entries, whereas Post-Interesting will be more magazine-like, containing reviews, interesting links, and compact summaries of highly crisp scientific or philosophical ideas.... (Of course, even my idea of "magazine-like" contains a lot of personal opinions!)

Not that I really have time to maintain one blog let alone two, but from time to time I seem to be overtaken by an irresistible desire to expunge massive amounts of verbiage ;-D

If people make a lot of interesting posts to Post-Interesting then one day it will be a multimedia magazine and put Wired and Cosmopolitan out of business! (For now I just put three moderately interesting initial posts there....)

-- Ben

Wednesday, November 16, 2005

Reality and Religion (a follow-up to earlier posts on Objective/Subjective Reality)

This post is a response to Bob McCue's comments to my earlier blog entry on "Objective and Subjective Reality". Scroll down after going to

http://www.goertzel.org/blog/2005/07/objective-versus-subjective-reality.html

to read his comments.

Bob is a former Mormon and has written extensively and elegantly about his reasons for leaving the faith:

http://mccue.cc/bob/spirituality.htm

He read my blog on objective/subjective reality and my essay on "social/computational/probabilist" philosophy of science

http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm

and then posed some questions regarding the probabilistic justification of religious beliefs.

Bob: The questions you raise are deep and fascinating ones and unfortunately I don't have time right now to write a reply that does them justice.

However, I can't resist saying a few things ;-)

I was never religious but my ex-wife was and, although this led to numerous unpleasant arguments between us, it also led me to gain some degree of appreciation (OK, not all that much!) for the religious perspective. For her (as a Zen Buddhist) it was never about objective truth at all, it was always about subjective experience -- her own and that of the others in her sangha (religious group). If probability theory was relevant, it was in the context of evaluations like

Probability (my own spiritual/emotional state is good GIVEN THAT I carry out these religious practices)
>
Probability (my own spiritual/emotional state is good GIVEN THAT I don't carry out these religious practices)

The evaluation criterion was internal/subjective not external/objective. The actual beliefs of the religion were only evaluated in regard to their subjective effects on the believer's internal well-being. This fits in with a Nietzschean perspective in which "An organism believes what it needs to believe in order to survive", if you replace "survive" with "maximize internal satisfaction" (which ultimately approximately reduces to Nietzsche's "survival" if one takes an evolutionary view in which we have evolved to, on average, be satisfied by things correlated with our genomes' survival).

I am not sure what this has to do with religions like Mormonism though. I think my ex got interested in Zen (in her mid-20's) partly because I had talked to her about it years before that, when as a teenager I had found Huang Po's Zen writings (on exiting the world of thought and ideas and entering the world of pure truth/nothingness) really radical and fascinating. Zen is not very typical of religions and it's questionable whether it really belongs in the "religion" category -- it's a borderline case. It specifically teaches that the external, "objective" world is illusory and urges you to fully, viscerally and spiritually understand this world's construction via the mind. Thus in a Zen perspective the empirical validation or refutation of hypotheses (so critical to science) is not central, because it takes place within a sphere that is a priori considered illusory and deceptive. Because of this Zen tends not to make statements that contradict scientific law; rather it brushes the whole domain of science aside as being descriptive of an illusory reality.

I guess that Mormonism is different in that it makes hypotheses that directly contradict scientific observation (e.g. do Mormons hold that the Earth was created 6000 years ago?). But still, I suspect the basic psychological dynamics are not that different. People believe in a religion because this belief helps them fulfill their own goals of personal, social or spiritual satisfaction. Religious people may also (to varying extents) have a goal of recognizing valid patterns in the observed world; but people can have multiple goals, and apparently for religious people the goal of achieving personal/social/spiritual satisfaction thru religion overwhelms the goal of recognizing valid patterns in the observed world. I find nothing very mysterious in this.

Bob: You ask about belief in Kundalini Yoga (another obsession of my ex-wife, as it happens.) I guess that the KY system helps people to improve their own internal states and in that case people may be wise to adopt it, in some cases... even though from a scientific view the beliefs it contains are a tricky mix of sense and nonsense.

However, it seems pretty clear to me that religious beliefs, though they may sometimes optimally serve the individual organism (via leading to various forms of satisfaction), are counterproductive on the species level.

As a scientific optimist and transhumanist I believe that the path to maximum satisfaction for humans as a whole DOES involve science -- both for things like medical care, air conditioning and books and music, and for things like creating AI's to help us and creating nanotech and gene therapy solutions for extending our lives indefinitely.

There's a reason that Buddhism teaches "all existence involves suffering." It's true, of course -- but it was even more true in ancient India than now. There was a lot more starvation and disease and general discomfort in life back then, which is why a suffering-focused religion like Buddhism was able to spread so widely. The "suffering is everywhere" line wouldn't sell so well in modern America or Western Europe, because although suffering still IS everywhere, it's not as extreme and not as major a component of most people's lives. Which is due, essentially, to science. (I am acutely aware that in many parts of the world suffering is a larger part of people's lives, but this does not detract from the point I am making.)

Since religious belief systems detract from accurate observation of patterns in reality, they detract from science and thus from the path with the apparently maximal capacity to lead humanity toward overall satisfaction, even though they may in fact deliver maximal personal satisfaction to some people (depending on their personal psychology).

However, one may argue that some people will never be able to contribute to science anyway (due to low intelligence or other factors), so that if they hold religious beliefs and don't use them to influence the minds of science-and-technology-useful people, their beliefs are doing no harm to others but may be increasing their own satisfaction. Thus, for some people to be religious may be a good thing in terms of maximizing the average current and long term satisfaction of humanity.

There is also a risk issue here. Since religion detracts from science and technology, it maintains humans in a state where they are unlikely to annihilate the whole species, though they may kill each other in more modest numbers. Science gives us more power for positive transformation and also more power for terrible destruction. The maximum satisfaction achievable thru science is higher than thru religion (due to the potential of science to lead to various forms of massively positive transhumanism), but the odds of destruction are higher too. And we really have no way of knowing what the EXPECTED outcome of the sci-tech path is -- the probabilities of transcension versus destruction.

[As I wrote the prior paragraph I realized that no Zen practitioner would agree with me that science has the power to lead to greater satisfaction than religion. Semantics of "satisfaction" aside they would argue that "enlightenment" is the greatest quest and requires no technology anyway. But even if you buy this (which I don't, fully: I think Zen enlightenment is an interesting state of mind but with plusses and minuses compared to other ones, and I suspect that the transhuman future will contain other states of mind that are even more deep and fascinating), it seems to be the case that only a tiny fraction of humans have achieved or ever will achieve this exalted state. Transhumanist technology would seem to hold the possibility of letting any sentient being choose their own state of mind freely, subject only to constraints regarding minimizing harm to others. We can all be enlightened after the Singularity -- if we want to be! -- but we may well find more appealing ways to spend our eternity of time!! -- ]

OK, I drifted a fair way from Mormonism there, back to my usual obsessions these days. But hopefully it was a moderately interesting trajectory.

For a more interesting discussion of Mormonism, check out the South Park episode "All About Mormons." It was actually quite educational for me.

Saturday, October 22, 2005

Quantum Erasers, Psychokinesis and Time Travel

This post is inspired by a study of the “delayed choice quantum eraser” experiment described e.g. at

http://www.bottomlayer.com/bottom/kim-scully/kim-scully-web.htm

http://www.dhushara.com/book/quantcos/qnonloc/eraser.htm

Even though the quantum eraser experiments don’t allow true “backwards causation,” this doesn’t prove that such a thing is impossible. It just proves that there is no way to do it within the commonly accepted constraints of physical law. There is at least one concrete possibility for how currently known physical law may be breakable, in a way that would allow backward causation (and, as an effective consequence, time travel – since being able to cause events in the past would mean being able to create an exact replica of oneself in the past, including a brain-state possessing the feeling of having just been quantum-magically transported into the past).

This possibility is “quantum psychokinesis” – a notion which sounds bizarre, but is apparently supported by a variety of experiments done by respected scientists at various institutions including Princeton University; see

http://www.fourmilab.ch/rpkp/strange.html

The simplest of these experiments involve people trying to influence, by the power of concentration, random events such as the direction of an electron’s spin. A long list of experiments show that, after some training, people have a weak but real ability to do this. Over tens of thousands of trials people can make electrons spin in the direction they want to 51% of the time or so, whereas chance would dictate merely 50%. This is a small difference but over so many trials is highly statistically significant.
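(For the statistically inclined, here's a quick sanity check on why a 51% hit rate can be wildly significant; the 50,000-trial figure is my own illustrative assumption, not a number taken from the published experiments.)

```python
from math import erf, sqrt

def z_score(hits, n, p=0.5):
    """Normal approximation to the binomial: standard deviations above chance."""
    return (hits - n * p) / sqrt(n * p * (1 - p))

n = 50_000                    # assumed number of trials, purely for illustration
hits = int(0.51 * n)          # a 51% hit rate
z = z_score(hits, n)          # roughly 4.5 standard deviations above chance
p_one_sided = 0.5 * (1 - erf(z / sqrt(2)))
print(f"z = {z:.1f}, one-sided p ~ {p_one_sided:.1e}")   # on the order of 4e-06
```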

Hooking this kind of PK experiment up to a quantum eraser apparatus, one would obtain a practical example of reverse causation. If this kind of PK actually works, then in the context of the above “paradox” situation, for example, it really would be possible for someone on Alpha Centauri to send messages faster than light to someone back home, via biasing the direction of spin of the coupled twin particle observed on Alpha Centauri. The rate of information transmission would be extremely low, since all that PK has ever been observed to do is give a slight statistical bias to events otherwise thought random. But with an appropriate code even a very slow rate of information transmission can be made to do a lot. And hypothetically, if this sort of PK phenomenon is actually real, one has to imagine that AI’s in the future will find ways to amplify it far beyond what the human brain can do.

Quantum Theory and Consciousness

Another highly nerdy and technical blog entry…

I've been working on the last couple chapters of my long-due philosophy-of-mind book "The Hidden Pattern", and one of the chapters is on quantum reality, so I've been re-studying some of the trickier aspects of quantum theory and its interpretation.

In the course of this, I've come to what I think is a clearer understanding of the relation between quantum theory and consciousness, based on the "decoherence" approach to quantum measurement -- see

http://en.wikipedia.org/wiki/Quantum_decoherence

for a refresher on this topic.

This blog entry will make the most sense to readers who are at least a little familiar with quantum theory, at least at the popular-science level.

Unlike what Eugene Wigner suggested back in the 1960’s, we can’t quite say “consciousness is the collapse of the wave function” because in the decoherence approach the wave function does not collapse – there are merely some systems that are almost-classical in the sense that there is minimal interference between the different parts of their wave function.

Of course, we can always say “everything is conscious” but this doesn’t really solve anything – even if everything is conscious, some things are more conscious than others and the problem of consciousness then is pushed into defining what it means for one thing to have a higher degree of consciousness than another.

The analogue of “consciousness is the collapse of the wave function” in the decoherence approach would seem to be “consciousness is the process of decoherence.” I propose that this is actually correct in a fairly strong sense, although not for an entirely obvious reason.

Firstly, I suggest that we view consciousness as “the process of observing.” Now, “observation,” of course, is a psychological and subjective concept, but it also has a physical correlate. I suggest the following characterization of the physical substrate of observation: Subjective acts of observation physically correspond to events involving the registration of something in a memory from which that thing can later be retrieved.

It immediately follows from this that observation necessarily requires an effectively-classical system that involves decoherence.

But what is not so obvious is that all decoherence involves an act of observation, in the above sense. This is because, as soon as a process decoheres, the record of this process becomes immanent in the perturbations of various particles all around it – so that, in principle, one could reconstruct the process from all this data, even though this may be totally impractical to do. Therefore every event of decoherence counts as an observation, since it counts as a registration of a memory that can (in principle) be retrieved.

Most events of decoherence correspond to registration in the memory of some fairly wide and not easily delineated subset of the universe. On the other hand, some events of decoherence are probabilistically concentrated in one small subset of the universe – for example, in the memory of some intelligent system. When a human brain observes a picture, the exact record of the picture cannot be reconstructed solely from the information in that brain – but a decent approximation can be. We may say that an event of registration is approximately localized in some system if the information required to reconstruct the event in an approximate way is contained in that system. In this sense we may say that many events of consciousness are approximately localized in particular systems (e.g. brains), though in an exact sense they are all spread more widely throughout the universe.

So, just as the Copenhagen-interpretation notion of “wave function collapse” turns out to be a crude approximation of reality, so does the notion of “wave function collapse as consciousness.” But just as decoherence conceptually approximates wave function collapse, so the notion of “decoherence as registration of events in memory as consciousness” conceptually approximates “wave function collapse as consciousness.”

How is this insight reflected in the language of patterns (the theme of my philosophy book – “everything is pattern”)? If a system registers a memory of some event, then in many cases the memory within this system is a pattern in that event, because the system provides data that allows one to reconstruct that event. But the extent to which a pattern is present depends on a number of factors: how simple is the representation within the system, how difficult is the retrieval process, and how approximate is the retrieved entity as compared to the original entity. What we can say is that, according to this definition, the recognition of a pattern is always an act of consciousness. From a physics point of view, though, not all acts of consciousness need to correspond to recognitions of patterns. On the other hand, if one takes a philosophical perspective in which pattern is primary (the universe consists of patterns) then it makes sense to define pattern-recognition as identical to consciousness (???)

Of course, none of this forms a solution to the "hard problem of consciousness," which may be phrased as something like "how does the feeling of conscious experience connect with physical structures and dynamics?" This is a philosophically subtler issue and you'll have to wait for "The Hidden Pattern" to read my views on it these days (which are different from anything I've published before). But an understanding of the physical correlates of consciousness is a worthwhile thing in itself, as well as a prerequisite to an intelligent discussion of the “hard problem.”

What do you think?

Too many stupid professors and bureaucrats...

I posted a blog a while ago whining about the annoyingness of the style of writing and thinking most common in academia today.

This is another one, with a slightly different slant.

At the end of the whining, however, I'll include an actual constructive suggestion for how to make some aspects of the academic world better. (Not that I expect my suggestion to have any actual impact!)

As I mentioned before, I've been making a push to submit papers and books for publication recently; something I haven't done much of since leaving academia in the late 90's. It's been quite an experience!

At first I thought I was doing something badly wrong. I have had some publications accepted but my rejection rate has been higher than I thought -- and not because what I'm submitting is bad (really!), mostly just (egads! can you believe it!) because it's unorthodox.

Of course, I'm revising and resubmitting and everything will be published in time. But the process has been educational as well as frustrating. And I've become aware that others whose work is even less radical than mine have been having an even more annoying time with this sort of thing.

I recently got two emails from friends reporting similar experiences to my own.

One is a biologist who recently left a major university for industry and has worked out a truly radical technique for repairing some types of DNA damage. This technique has now been demonstrated in live cells as well as in the test tube. Amazing stuff, with potential to cure some degenerative diseases as well as to slow human aging.

His paper? Rejected without review six times so far. WITHOUT REVIEW each time !!!

Another is an MD who has found particular patterns of DNA mutations that correspond to a couple very well known diseases. But -- oops -- these patterns are more complex than the ones biologists are used to looking at, and they occur in parts of the genome that biologists don't normally like to look at. So, no matter how statistically significant the results, he's got an uphill battle to fight. He's fighting against convention and presupposition. The result: right after he gets some breakthrough results, his government grant funding is cut off.

As compared to in the late 80's and early 90's, it seems much more common now to have things rejected without review. At least, this seems to be happening to me moderately often lately (though not a majority of the time), whereas back then I don't remember it ever happening.

A draft of my book on the Novamente design for general intelligence (not fully polished -- that's still in progress) was rejected by a publisher recently -- the rejection didn't surprise me, but the nature of the rejection did. The book wasn't even sent to a reviewer -- instead the editor just sent back a letter saying that their book series was intended for "serious academic works."

I had a bit of an email conversation with the editor, which revealed that he had shown the book to a "very distinguished AI professor" who had commented that due to the broad scope of the book and its claims to address general intelligence, it couldn't be a very serious academic work. Heh. Well, my ideas might be WRONG, but they're definitely just as serious as a lot of other books published. And the book doesn't contain a lot of mathematical proofs and only a handful of experimental results, but, it has more of both than Minsky's Society of Mind -- which also addresses general intelligence (or tries to) -- but wait, Minsky is old and famous, he's allowed to address big topics.... What we want to avoid is young people addressing big and interesting topics, right? But wait, why?

Please understand the nature of my complaint: I'm not pissed because this publisher rejected my book, I'm pissed because it was rejected without being read or even seriously skimmed over. And note that I've had six academic books published before, so it should be obvious to the publisher (who had my resume') that I'm not a complete raving crackpot.

I had the same experience with a couple bioinformatics papers I recently submitted -- which were nowhere near as eccentric as my book on Novamente, but presented algorithms and approaches radically different from what's typical in the bioinformatics field. Not just rejected -- rejected WITHOUT REVIEW.

Of course, I also had some bioinformatics papers rejected after being reviewed, but by reviewers who plainly understood nothing in the paper. Of course, I could have tried to explain my methods more didactically -- but then the papers would have been rejected for being too long! Tricky, tricky....

Yes, I have had some papers accepted this year, and I have a couple of books (a futurist manifesto of sorts, and an edited volume on AGI) coming out in an academic press later this year. So these are not the whinings of a complete academic failure ;-p

I've been through enough of this crap before to realize that, after enough resubmissions, eventually one's books or papers hit a publisher or journal who sends them to intelligent and open-minded reviewers who actually read the materials they're given and either understand them or admit they don't (so the editor can find someone else who does). Eventually. But it's a long and annoying search process.

The academic community does reward innovators -- sometimes, eventually.... But more often than not it places huge obstacles in the way of innovation, via a publication process that makes it much easier to publish variations on orthodox ideas than unusual approaches. One might argue that this kind of extremely strong bias is necessary to filter out all the crap in the world. But I don't believe it. Major changes to the reviewing process are in order.

Collaborative filtering technology would seem to provide a fairly easy answer. Suppose one assumes, as a basis, that individuals with PhD's (or MD's or other similar degrees) are, on the whole, reasonably valid raters of academic content. Then one can give each PhD a certain number of rating points to allocate each year, and let them use these points to rate each other's work. People can then post their work online in resources like arxiv.org, and ratings can then be used to guide individuals to the most important or interesting works.

Journals aren't needed since the Net and computer printers are so widespread, and book publishers may still exist, but will be able to assume that if a book manuscript has received a reasonable number of rating points in its online version, then it's probably worth publishing.
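To make the mechanics concrete (the budget size, names and numbers below are arbitrary, and this is just a sketch of the idea, not a worked-out proposal): each credentialed rater gets a fixed yearly budget of points, spends them on whatever posted works they judge valuable, and works are ranked by the points they accumulate.

```python
from collections import defaultdict

YEARLY_BUDGET = 100   # assumed points per PhD-level rater per year

class RatingPool:
    def __init__(self):
        self.spent = defaultdict(int)    # rater -> points used so far this year
        self.scores = defaultdict(int)   # work (e.g. a posted preprint id) -> accumulated points

    def rate(self, rater, work, points):
        if points <= 0 or self.spent[rater] + points > YEARLY_BUDGET:
            raise ValueError("rating would exceed this rater's yearly budget")
        self.spent[rater] += points
        self.scores[work] += points

    def ranking(self, top=10):
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:top]

pool = RatingPool()
pool.rate("dr_smith", "some-posted-preprint", 15)
pool.rate("dr_jones", "some-posted-preprint", 5)
print(pool.ranking())   # [('some-posted-preprint', 20)]
```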

You can argue that citations play a similar role -- but citations only play a role after a work is published, they don't help with the irritation of getting innovative ideas past conservative referees in the first place.

Anyway I don't have time to work toward implementing an idea like this, so I'll just keep working within the existing, annoying system, unless I manage to gather enough money for my research from business profits or private investments or donations that I don't need to worry about the often-absurd publication game.

Urrrghh!! I can easily see how, facing this kind of crap, young scientists and philosophers give up on trying to think wild and novel thoughts and follow along with everyone else.

Following along certainly would create a lot less hassle.

Or else giving up on the game of seeking reputation and simply wandering around in the woods like Zarathustra (Nietzsche's, not my son; my son Zar only wanders around these days in the simulated woods inside World of Warcraft!) and keeping one's thoughts to oneself (and then foolishly emerging to preach them to the world after a couple decades, only to find that no one understands what the HELL you're talking about...)

Humanity -- gotta love it...

Or -- hmm -- do you ???

Ah well...

Friday, October 07, 2005

Immortality and the Potential Obsolescence of the Self


I recently co-founded a group called the DC Future Salon that meets once a month in Bethesda, Maryland, to discuss futurist issues (if you live near DC and want to join, join the dcfuture group on yahoogroups). This week our salon meeting focused on the notion of immortality. After a nice lecture and movie showing by Immortality Institute founder (and DC Future Salon co-organizer) Bruce Klein, the discussion traveled through various topics, including the viability of cryonics and the politics of discussing immortality among nontranshumanists – and finally, moved on to more philosophical issues, such as the reasons why immortality is desirable. One of the key issues that came up here is the extent to which the individual self, the personal identity – the thing most transhumanists want most to preserve via immortality, much more so than our physical bodies – is actually a real thing worth preserving. Preserving the physical body is, like uploading, just one means to preserving the self. But what is this “self” that’s so valuable to persist throughout time?

There is a lot of neuropsychological research showing that the “self” is in a strong sense an illusion – much like its sister illusion, “free will.” Thomas Metzinger’s recent book Being No One makes this point in an excellently detailed way. The human mind’s image of itself – what Metzinger calls the “phenomenal self” – is in fact a construct that the human mind creates in order to better understand and control itself, it’s not a “real thing.” Various neuropsychological disorders may lead to bizarre dysfunctions in self-image and self-understanding. And there are valid reasons to speculate that a superhuman mind – be it an AI or a human with tremendously augmented intelligence – might not possess this same illusion. Rather than needing to construct for itself a story of a unified “self entity” controlling it, a more intelligent and introspective mind might simply perceive itself as the largely heterogeneous collection of patterns and subsystems that it is. In this sense, individuality might not survive the transcendence of minds beyond the human condition.

The key philosophical point here is: What is the goal of immortality? Or, to put it more precisely: What is the goal of avoiding involuntary death? Is it to keep human life as we know it around forever? That is a valid and non-idiotic goal. Or is it to keep the process of growth alive and flourishing beyond the scope painfully and arbitrarily imposed on it by the end of the human life?

Human life as it exists now is not a constant, it's an ongoing growth process; and for those who want it to be, human life beyond the current maximum lifespan and beyond the traditional scope of humanity will still be a process of growth, change and learning. Fear of death will largely be replaced by more interesting issues like the merit of individuality in its various forms -- and other issues we can't come close to foreseeing yet.

It may be that, when we live long enough and become smart enough, what we find out is that maintaining individuality unto eternity isn't interesting, and it's better to merge into a larger posthuman intelligent dynamical-pattern-system. Or it may be that what we find out is that individuality still seems interesting forever, since there are so many resources available at the posthuman stage, and diversity still seems like an interesting value (plenty of room for both humans and transhuman intelligent dynamical pattern systems!).

The quest for radical life extension is largely about staying around to find out about things like this!

And there is, of course, a familiar and acute irony in observing that -- while these (along with the scientific puzzles of human biology, uploading and so forth) are the interesting issues regarding immortality -- the public discourse on immortality will be focusing on much less fascinating aspects for quite some time to come: aspects like whether living forever is a violation of the will of the divine superbeing who created us all 6000 years ago....


Friday, July 22, 2005

P.S. on objective/subjective reality and consciousness (and future virtual Elvises)

Well, I started writing a followup to my previous blog entry on subjective/objective reality, dealing with issues relating to consciousness and qualia, but it got way too big for a reasonable blog entry, and so I've posted it as an HTML document:

http://www.goertzel.org/new_essays/QualiaNotes.htm

But it's still rough and informal and speculative in the manner of a blog entry, rather than being a really polished essay.

Of course, I have plenty more to say on the topic than what I wrote down there, but -- well -- the usual dilemma ... too many thoughts, too little time to write them all down... I need to prioritize. Entertaining, speculative philosophy only gets a certain fraction of my time these days!

BTW, I wrote about 1/3 of those notes while watching "Jailhouse Rock" with the kids, but I don't know if Elvis's undulating pelvis had any effect on the style or contents of the essay or not. (Wow -- the Elvis phenomenon really makes piquant the whole transhumanist dilemma of "Is humanity really worth preserving past the Singularity or not?"!! ... A decent helping of art, beauty and humor exists there in Elvis-land, sure -- but along with such a whopping dose of pure and unrefined asininity --- whoa.... )

How many of you readers out there agree that the first superhuman AI should be programmed to speak to humans through a simulation of Elvis's face??? ;-D

Tuesday, July 19, 2005

Objective versus subjective reality: Which is primary?

This post is a purely intellectual one -- playing at the border between "blog entry" and "brief philosophical essay"..... It transmits a small portion of the philosophical train of thought I undertook while wandering with Izabela at White Sands National Monument a few weeks ago. Much of that train of thought involved issues such as free will and the emergence of notions of self, will and reality in the infant's mind (the epigenesis of conceptual structures and cognitive dynamics in the infant and toddler mind is much on my mind these days, because in the Novamente AI project we're working on putting together a demonstration of Novamente progressing through the earlier of Jean Piaget's stages of child cognitive development). But what I'll discuss here today is a bit different from that: the relation between objective and subjective reality.

One of my motivations for venturing into this topic is: I've realized that it's wisest to clearly discuss the issue of reality before entering into issues of consciousness and will. Very often, when I try to discuss my theory of consciousness with people, the discussion falls apart because the people I'm talking to want to assume that objective reality is primary, or else that subjective experiential reality is primary. Whereas, to me, a prerequisite for intelligently discussing consciousness is the recognition that neither of these two perspectives on being is primary -- each has their own validity, and each gives rise to the other in a certain sense.

OK, so ... without further ado... : There are two different ways to look at the world, both of which are to some degree sympathetic to me.

One way is to view the objective world as viewed by science and society as primary, and to look at the subjective worlds of individuals as approximations to objective reality, produced by individual physical systems embedded within physical reality.

Another way is to view the subjective, experiential world of the individual (mine, or yours) as primary, and look at "objective reality" as a cognitive crutch that the experiencing mind creates in order to make use of its own experience.

I think both of these views are valid and interesting ones -- they each serve valuable purposes. They don't contradict each other, because the universe supports "circular containment": it's fine to say "objective reality contains subjective reality, and subjective reality contains objective reality." The theory of non-well-founded sets shows that this kind of circularity is perfectly consistent in terms of logic and mathematics. (Barwise and Etchemendy's book "The Liar" gives a very nice exposition of this kind of set theory for the semi-technical reader. I also said a lot about this kind of mathematics in my 1994 book Chaotic Logic, see a messy rough draft version of the relevant chapter here ... (alas, I long ago lost the files containing the final versions of my books!!))

But it's also interesting to ask if either of the two types of world is properly viewed as primary. I'll present here an argument that it may make sense to view either subjective or objective reality as primary, depending on the level of detail with which one is trying to understand things.

My basic line of argument is as follows. Suppose we have two entities A and B, either of which can be derived from the other -- but it's a lot easier to derive B from A than to derive A from B. Then, using the principle of Occam's Razor, we may say that the derivation of B from A is preferable, is more fundamental. (For those not in the know, Occam's Razor -- the maxim of preferring the simplest explanation, from among the pool of reasonably correct ones -- is not just a pretty little heuristic, but is very close to the core of intelligent thought. For two very different, recent explorations of this theme, see Marcus Hutter's mathematical theory of general intelligence; and Eric Baum's book What is Thought (much of which I radically disagree with, but his discussion of the role of Occam's Razor in cognition is quite good, even though he for some reason doesn't cite Ray Solomonoff who conceived the Occam-cognition connection back in the 1960's)).

I will argue here that it's much easier to derive the existence of objective reality from the assumption of subjective reality, than vice versa. In this sense, I believe, it's sensible to say that the grounding of objective reality in subjective reality is primary, rather than vice versa.

On the other hand, it seems that it's probably easier to derive the details of subjective reality from the details of objective reality than vice versa. In this sense, when operating at a high level of precision, it may be sensible to say that the grounding of subjective reality in objective reality is primary, rather than vice versa.

Suppose one begins by assuming "subjective reality" exists -- the experienced world of oneself, the sensations and thoughts and images and so forth that appear in one's mind and one's perceived world. How can we derive from this subjective reality any notion of "objective reality"?

Philip K. Dick defined objective reality as "that which doesn't go away even when you stop believing in it." This is a nice definition but I don't think it quite gets to the bottom of the matter.

Consider the example of a mirage in the desert -- a lake of water that appears in the distance, but when you walk to its apparent location, all you find is sand. This is a good example of how "objective reality" arises within subjective reality.

There is a rule, learned through experience, that large bodies of water rarely just suddenly disappear. But then, putting the perceived image of a large body of water together with the fact that large bodies of water rarely disappear, and the fact that when this particular large body of water was approached it was no longer there -- something's gotta give.

There are at least two hypotheses one can make to explain away this contradiction:


1. one could decide that deserts are populated by a particular type of lake that disappears when you come near it, or

2. one can decide that what one sees from a distance need not agree with what one sees and otherwise senses from close up.

The latter conclusion turns out to be a much more useful one, because it explains a lot of phenomena besides mirage lakes.

Occam's Razor pushes toward the second conclusion, because it gives a simple explanation of many different things, whereas explanations of form 1 are a lot less elegant, since according to this explanatory style, each phenomenon where different sorts of perception disagree with each other requires positing a whole new class of peculiarly-behaving entity.
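(A cartoon of that comparison, with completely made-up "description cost" units standing in for real complexity: the disappearing-lake style of explanation pays a fresh cost for every phenomenon where far and near perception disagree, while the single general rule pays its cost once.)

```python
def cost_special_entities(n_phenomena, cost_per_entity_class=10):
    """Explanation style 1: posit a new class of peculiar entity per phenomenon."""
    return n_phenomena * cost_per_entity_class

def cost_general_rule(n_phenomena, fixed_cost=12):
    """Explanation style 2: one rule -- distant appearance need not match close-up observation."""
    return fixed_cost

for n in (1, 3, 10):
    print(n, cost_special_entities(n), cost_general_rule(n))
# As the phenomena pile up, the single general rule wins by a growing margin.
```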

Note that nothing in the mirage lake or other similar experiences causes one to doubt the veracity of one's experiences.

Each experience is valid unto itself. However, the mind generalizes from experiences, and takes particular sensations and cognitions to be elements of more general categories. For instance, it takes a particular arrangement of colors to be a momentary image of a "lake", and it takes the momentary image of a lake to be a snapshot of a persistent object called a "lake." These generalizations/categorizations are largely learned via experience, because they're statistically valid and useful for achieving subjectively important goals.

From this kind of experience, one learns that, when having a subjective experience, it's intelligent to ask "But the general categories I'm building based on this particular experience -- what will my future subjective experiences say about these categories, if I'm experiencing the same categories (e.g. the lake) through different senses, or from different positions, etc." And as soon as one starts asking questions like that -- there's "objective reality."

That's really all one needs in order to derive objective reality from subjective reality. One doesn't need to invoke a society of minds comparing their subjective worlds, nor any kind of rigorous scientific world-view. One merely needs to posit generalization beyond individual experiences to patterns representing categories of experience, and an Occam's Razor heuristic.

In the mind of the human infant, this kind of reasoning is undertaken pretty early on -- within the first six months of life.

It leads to what developmental psychologists call "object permanence" -- the recognition that, when a hand passes behind a piece of furniture and then reappears on the other side, it still existed during the interim period when it was behind the furniture. "Existed" here means, roughly, "The most compact and accurate model of my experiences implies that if I were in a different position, I would be able to see or otherwise detect the hand while it was behind the chair, even though in actual fact I can't see or detect it there from my current position." This is analogous to what it means to believe the mirage-lake doesn't exist: "The most compact and accurate model of my experiences implies that if I were standing right where that lake appears to be, I wouldn't be wet!"

Notice from these examples how counterfactuality is critical to the emergence of objective from subjective reality. If the mind just sticks to exactly what it experiences, it will never evolve the notion of objective reality. Instead, the mind needs to be able to think "What would I experience if...." This kind of basic counterfactuality leads fairly quickly to the notion of objective reality.

On the other hand, what does one need in order to derive subjective reality from objective reality? This is a lot trickier!

Given objective reality as described by modern science, one can build up a theory of particles, atoms, molecules, chemical compounds, cells, organs (like brains) and organisms -- and then one can talk about how brains embodied in bodies embedded in societies give rise to individual subjective realities. But this is a much longer and more complicated story than the emergence of objective reality from subjective reality.

Occam's-razor-wise, then, "objective reality emerges from subjective reality" is a much simpler story than the reverse.

But of course, this analysis only scratches the surface. The simple, development-psychology approach I've described above doesn't explain the details of objective reality -- it doesn't explain why there are the particular elementary particles and force constants there are, for example. It just explains why objective reality should exist at all.

And this point gives rise to an interesting asymmetry. While it's easier to explain the existence of objective reality based on subjective reality than vice versa, it seems like it's probably easier to explain the details of subjective reality based on objective reality than vice versa. Of course, this is largely speculative, since right now we don't know how to do either -- we can't explain particle physics based on subjectivist developmental psychology, but nor can we explain the nature of conscious experience based on brain function. However, my intuition is that the latter is an easier task, and will be achieved sooner.

So we then arrive at the conclusion that:


  • At a coarse level of precision, "subjectivity spawns objectivity" is a simpler story than vice versa
  • At a higher level of precision, "objectivity spawns subjectivity" is a simpler story than vice versa

So, which direction of creation is more fundamental depends on how much detail one is looking for!

This is not really such a deep point -- but it's a point that seems to elude most philosophers, who seem to be stuck either in an "objective reality is primary" or "subjective reality is primary" world-view. It seems to me that recognizing the mutual generation of these two sorts of reality is prerequisite for seriously discussing a whole host of issues, including consciousness and free will. In my prior writings on consciousness and will I have taken for granted this kind of mutual-generationist approach to subjectivity/objectivity, but I haven't laid it out explicitly enough.

All these issues will be dealt with in my philosophy-of-mind book "The Hidden Pattern", which I expect to complete mid-fall. I wish I had more time to work on it: this sort of thinking is really a lot of fun. And I think it's also scientifically valuable -- because, for example, I think one of the main reasons the field of AI has made so little progress is that the leading schools of thought in academic and industrial AI all fall prey to fairly basic errors in the philosophy of mind (such as misunderstanding the relation between objective and subjective reality). The correct philosophy of mind is fairly simple, in my view -- but the errors people have made have been quite complicated in some cases! But that's a topic for future blog entries, books, conversations, primal screams, whatever....

More later ... it's 2AM and a warm bed beckons ... with a warm wife in it ;-> ... (hmm -- why this sudden emphasis on warmth? I think someone must have jacked the air conditioning up way too high!!)

Monday, July 18, 2005

The massive suckage of writing academic research papers / the ontology of time / White Sands

I was a professor for 8 years, so I'm no stranger to the weird ways of academia. But I've been pretty much away from that universe for a while, pursuing commercial software development and independent research. Recently I've re-initiated contact with the world of academic research, because it's become clear that getting some current academic publications on my AI and bioinformatics work will be valuable to my scientific and business pursuits. Egads!! The old frustrations are coming back -- badly enough to spill over into a blog entry....

This is a pretty boring blog entry, I'm afraid: just a long rant about how annoying academic research can be. But I got irritated enough to write this stuff down, so I guess I may as well post it....

I've been working on an academic paper together with my former Webmind colleague Pei Wang, on the topic of "why inference theories should represent truth values using two numbers rather than one." For instance, the inference component of my Novamente AI system represents the truth values of statements using a probability and a "weight of evidence" (which measures, roughly, the number of observations on which the probability is based). Pei's NARS reasoning system uses two-component truth values with a slightly different interpretation.
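To illustrate why the second number matters (a toy illustration only -- these are not the actual Novamente or NARS revision formulas): two statements can carry the same probability while resting on wildly different amounts of evidence, and a sensible rule for merging independent estimates should weight by that evidence.

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    probability: float      # estimated frequency with which the statement holds
    evidence_count: float   # roughly, the number of observations behind the estimate

    def revise(self, other: "TruthValue") -> "TruthValue":
        """Merge two independent estimates, weighting each by its evidence."""
        n = self.evidence_count + other.evidence_count
        p = (self.probability * self.evidence_count
             + other.probability * other.evidence_count) / n
        return TruthValue(p, n)

weak = TruthValue(0.8, evidence_count=5)      # 80%, but based on only 5 observations
strong = TruthValue(0.8, evidence_count=500)  # 80%, based on 500 observations
print(weak.revise(TruthValue(0.2, 5)))        # probability drops all the way to 0.5
print(strong.revise(TruthValue(0.2, 5)))      # probability barely moves (~0.79)
```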

Now, this is a perfectly decent paper we've written (it was just today submitted for publication), but, what strikes me is how much pomp, circumstance and apparatus academia requires in order to frame even a very small and simple point. References to everything in the literature ever said on any vaguely related topic, detailed comparisons of your work to whatever it is the average journal referee is likely to find important -- blah, blah, blah, blah, blah.... A point that I would more naturally get across in five pages of clear and simple text winds up being a thirty page paper!

I'm writing some books describing the Novamente AI system -- one of them, 600 pages of text, was just submitted to a publisher. The other two, about 300 and 200 pages respectively, should be submitted later this year. Writing these books took a really long time but they are only semi-technical books, and they don't follow all the rules of academic writing -- for instance, the whole 600 page book has a reference list no longer than I've seen on many 50-page academic papers, which is because I only referenced the works I actually used in writing the book, rather than every relevant book or paper ever written. I estimate that to turn these books into academic papers would require me to write about 60 papers. To sculpt a paper out of text from the book would probably take me 2-7 days of writing work, depending on the particular case. So it would be at least a full year of work, probably two full years of work, to write publishable academic papers on the material in these books!

For another example, this week I've been reading a book called "The Ontology of Time" by L. Nathan Oaklander. It's a pretty interesting book, in terms of the contents, but the mode of discourse is that of academic philosophy, which is very frustrating to me. It's a far cry from Nietzsche or Schopenhauer style prose -- academic philosophy takes "pedantic" to new heights.... The book makes some good points: it discusses the debate between philosophers promoting the "A-theory of time" (which holds that time passes) and the "B-theory of time" (which holds that there are only discrete moments, and that the passage of time is an illusion). Oaklander advocates the B-theory of time, and spends a lot of space defending the B-theory against arguments by A-theorists that are based on linguistic usage: A-theorists point out that we use a lot of language that implies time passes, in fact this assumption is embedded in the tense system of most human languages. Oaklander argues that, although it's convenient to make the false assumption that time passes for communicative purposes, nevertheless if one is willing to spend a lot of time and effort, one can reduce any statement about time passing to a large set of statements about individual events at individual moments.

Now, clearly, Oaklander is right on this point, and in fact my Novamente AI design implicitly assumes the B-theory of time, by storing temporal information in terms of discrete moments and relations of simultaneity and precedence between them, and grounding linguistic statements about time in terms of relationships between events occurring at particular moments (which may be concrete moments or moments represented by quantified mathematical variables).
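(Again, just a toy sketch of the flavor -- not the actual Novamente representation -- of what a B-theory-style temporal store looks like: events attached to discrete moments, with "before" and "simultaneous" as ordinary relations and no notion of a moving "now".)

```python
from dataclasses import dataclass, field

@dataclass
class TemporalStore:
    moments: dict = field(default_factory=dict)   # event name -> discrete moment index

    def add(self, event: str, moment: int):
        self.moments[event] = moment

    def before(self, a: str, b: str) -> bool:
        return self.moments[a] < self.moments[b]

    def simultaneous(self, a: str, b: str) -> bool:
        return self.moments[a] == self.moments[b]

store = TemporalStore()
store.add("lightning", 3)
store.add("thunder", 7)
print(store.before("lightning", "thunder"))   # True -- pure ordering, no "passage" of time
```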

There are also deep connections between the B-theory and Buddhist metaphysics, which holds that time is an illusion and only moments exist, woven together into apparent continua by the illusion-generating faculty of the mind. And of course there are connections with quantum physics: Julian Barbour in "The End of Time" has argued ably that in modern physics there is no room for the notion of time passing. All moments simply exist, possessing a reality that in a sense is truly timeless -- but we see only certain moments, and we feel time moving in a certain direction, because of the way we are physically and psychologically constructed.

But Oaklander doesn't get to the connections with Buddhism and quantum theory, because he spends all his time pedantically arguing for fairly simple conceptual points with amazing amounts of detail. The papers in the book go back 20 years, and recount ongoing petty arguments between himself and his fellow B-theorists on the one hand, and the A-theorists on the other hand. Like I said, it's not that no progress has been made -- I think Oaklander's views on time are basically right. What irritates me is the painfully slow rate of progress at which these very smart philosophers have proceeded. I attribute their slow rate of progress not to any cognitive deficits on their part, but to the culture and methodology of modern academia.

Obviously, Nietzsche would be an outcast in modern academia -- casting his books in the form of journal papers would really be a heck of a task!

And what if the scientists involved in the Manhattan Project had been forced to write up their incremental progress every step of the way, and fight with journal referees and comb the literature for references? There's no way they would have made the massively rapid progress they did....

And the problem is not restricted to philosophy, of course -- "hard" science has its own issues. In computer science most research results are published at least twice: once in a conference proceedings and once in a journal article. What a waste of the researcher's time, to write the same shit up twice ... but if you don't do it, your status will suffer and you'll lose your research grants, because others will have more publications than you!

Furthermore, if as a computer scientist you develop a new algorithm intended to solve real problems that you have identified as important for some purpose (say, AI), you will probably have trouble publishing this algorithm unless you spend time comparing it to other algorithms in terms of its performance on very easy "toy problems" that other researchers have used in their papers. Never mind if the performance of an algorithm on toy problems bears no resemblance to its performance on real problems. Solving a unique problem that no one has thought of before is much less impressive to academic referees than getting a 2% better solution to some standard "toy problem." As a result, the whole computer science literature (and the academic AI literature in particular) is full of algorithms that are entirely useless except for their good performance on the simple "toy" test problems that are popular with journal referees....

Research universities are supposed to be our society's way of devoting resources to advancing knowledge. But they are locked into a methodology that makes knowledge advance awfully damn slowly....

And so, those of us who want to advance knowledge rapidly are stuck in a bind. Either generate new knowledge quickly and don't bother to ram it through the publication mill ... or, generate new knowledge at the rate that's acceptable in academia, and spend half your time wording things politically and looking up references and doing comparative analyses rather than doing truly productive creative research. Obviously, the former approach is a lot more fun -- but it shuts you out from getting government research grants. The only way to get government research money is to move really slowly -- or else to start out with a lot of money so you can hire people to do all the paper-writing and testing-on-toy-problems for you....

Arrrgh! Anyway, I'm compromising, and wasting some of my time writing a small fragment of my research up for academic journal publication, just to be sure that Novamente AI is "taken seriously" (or as seriously as a grand AGI project can possibly be taken by the conservative-minded world we live in).... What a pain.

If society valued AGI as much as it valued nuclear weapons during World War II, we'd probably have superhuman AI already. I'm serious. Instead, those of us concerned with creating AGI have to waste our time carrying out meaningless acts like writing academic papers describing information already adequately described in semi-formal documents, just to be taken seriously enough to ask for research money and have a nonzero chance of getting it. Arrggh!

OK, I promise, the next blog entry won't be as boring as this, and won't be a complaint, either. I've actually been enjoying myself a lot lately -- Izabela and I had a great vacation to New Mexico, where we did a lot of hiking, including the very steep and very beautiful Chimney Canyon route down Mount Sandia, which I'd always wanted to do when I lived in New Mexico, but never gotten around to. Also, we camped out on the dunes in White Sands National Monument, which is perhaps the most beautiful physical location I know of. I can't think of anywhere more hallucinogenic -- psychedelic drugs would definitely enhance the experience, but even without them, the landscape is surprisingly trippy, giving the sensation of being in a completely different universe from the regular one, and blurring the distinction between inside and out....

Most of the time wandering around in White Sands was spent in conversation about the subtleties of the interrelationship between free will and consciousness -- interesting and perhaps valuable ideas that I haven't found time to write down yet, because all my writing-time these last couple weeks has been spent putting already-well-understood ideas into the form of academic papers ;-ppp White Sands is exactly the right place to mull over the structure of your mind, since the landscape itself projects you involuntarily into a kind of semi-meditative state....

Hmmm... maybe I'll write down those ideas about free will and consciousness in the next blog entry. It's tempting to write that stuff now -- but it's 1:25 AM, I think I'll go to sleep instead. Tomorrow, alas, is another day... (I tried to make all the days run into each other by taking Modafinil to eliminate my need for sleep -- but it just wound up upsetting my stomach too much, so I've had to go back to sleeping again: bummer!!)

Saturday, June 18, 2005

Time Travel, Free Will, Ouspensky, Xaj Kalikak, and The Trans-temporal Post-Singularity Ubermind

This entry is motivated by an interesting news article that my friend David Hart forwarded to me, about one of my old favorite topics: time travel.

I'll use the article as an excuse to riff on an idea I had back in the 1980's about the possible transtemporal nature of superhuman minds following the Singularity.

The article is titled "No paradox for time travellers" and appeared on NewScientist.com on 18 June 2005, written by Mark Buchanan. It pertains to a technical paper online at

www.arxiv.org/quant-ph/0506027

When I first got serious about science, back in my late teens and early 20's, time travel was my top choice of research area -- but after a little while I decided that AI, my second choice, was more likely to be achievable within my lifetime. (EVEN FURTHER DIGRESSION: My third choice was working toward human immortality via biology -- which I'm working on in the background now via my work with Biomind LLC -- but I backburnered that one because I don't enjoy biology personally nearly as much as physics or computer science. To me, it seems that biological immortality will be made possible via a combination of many relatively small insights and leaps -- "big science" like one sees in contemporary biology -- whereas time travel and AI seem more amenable to huge revolutionary insights ... this is one reason the latter interest me more than biology ... the other being that CS and physics have a mathematical elegance that appeals to me, and that biology lacks....)

I learned in the mid-80's, when studying general relativity theory in grad school at NYU, that modern physics deems time travel possible -- but difficult to achieve. Basically, it makes time travel into an engineering problem, but one that would probably require engineering on the scale of creating weird configurations of exotic forms of matter and energy ("exotic" meaning physically possible to produce, but incredibly difficult and/or expensive to produce using current technologies). Do-able, but probably not this decade....

Much later I read Kip Thorne's book "Black Holes and Time Warps", which reviews general relativity and its implications as regards time travel (along with other topics), and a host of other related papers, some of which are reviewed and referenced here.

(A much more ridiculous, though amusing, book on time travel is J.H. Brennan's "Time Travel: A New Perspective." Brennan gives you practical instructions on how to travel through time. Recommended only for entertainment value. One of the reviewers on amazon.com complains that the methods are inadequate because they can't be practiced by individuals acting alone -- they require that time travel be a group activity!)

Anyway, the article Dave forwarded is brief and the bulk of it goes as follows:

The laws of physics seem to permit time travel, and with it, paradoxical situations such as the possibility that people could go back in time to prevent their own birth. But it turns out that such paradoxes may be ruled out by the weirdness inherent in laws of quantum physics.

Some solutions to the equations of Einstein's general theory of relativity lead to situations in which space-time curves back on itself, theoretically allowing travellers to loop back in time and meet younger versions of themselves. Because such time travel sets up paradoxes, many researchers suspect that some physical constraints must make time travel impossible. Now, physicists Daniel Greenberger of the City University of New York and Karl Svozil of the Vienna University of Technology in Austria have shown that the most basic features of quantum theory may ensure that time travellers could never alter the past, even if they are able to go back in time.

The constraint arises from a quantum object's ability to behave like a wave. Quantum objects split their existence into multiple component waves, each following a distinct path through space-time. Ultimately, an object is usually most likely to end up in places where its component waves recombine, or "interfere", constructively, with the peaks and troughs of the waves lined up, say. The object is unlikely to be in places where the components interfere destructively, and cancel each other out.

Quantum theory allows time travel because nothing prevents the waves from going back in time. When Greenberger and Svozil analysed what happens when these component waves flow into the past, they found that the paradoxes implied by Einstein's equations never arise. Waves that travel back in time interfere destructively, thus preventing anything from happening differently from that which has already taken place. "If you travel into the past quantum mechanically, you would only see those alternatives consistent with the world you left behind you," says Greenberger.

Interesting... huh?

What this suggests is that, perhaps, time travel is quite possible and the reason that it seems paradoxical to us is because of our illusion of free will.

I.e. since we think we have free will, we don't like to think that if we go back in time we are constrained to do things consistent with presently observed reality...
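To make that intuition a little more concrete, here's a toy sketch -- my own cartoon of the amplitude-cancellation idea, not the actual Greenberger-Svozil formalism, and the numbers are made up. Pretend each possible history contributes a complex amplitude, sum the amplitudes for the histories leading to a given outcome, and take the squared magnitude as the (unnormalized) likelihood. Histories in which the traveller alters the recorded past are arranged here to arrive out of phase, so they cancel; histories consistent with the record reinforce each other.

```python
import cmath

# Toy illustration only (hypothetical phases, not the paper's real quantum
# mechanics): each "history" contributes a complex amplitude, and the weight
# of an outcome goes as the squared magnitude of the summed amplitudes.

def outcome_weight(amplitudes):
    """Sum the complex amplitudes for one outcome and square the magnitude."""
    return abs(sum(amplitudes)) ** 2

# Outcome A: "the past stays exactly as recorded" -- two paths arrive in phase.
consistent = [cmath.exp(1j * 0.3), cmath.exp(1j * 0.3)]

# Outcome B: "the traveller changes the past" -- two paths arrive out of phase.
paradoxical = [cmath.exp(1j * 0.3), cmath.exp(1j * (0.3 + cmath.pi))]

print("consistent history weight:", round(outcome_weight(consistent), 6))    # ~4.0
print("paradoxical history weight:", round(outcome_weight(paradoxical), 6))  # ~0.0
```

The paradox-producing branch isn't forbidden by some extra rule -- it simply interferes itself away, which is (very roughly) the flavor of the Greenberger-Svozil argument.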

I am reminded of Ouspensky's classic novel "The Strange Life of Ivan Osokin" -- where the main character Osokin convinces a magician to send him back in time to live through his life all over again... he's hoping to make his life better, by not making as many stupid decisions as he did the previous time around.

But the punchline is, while living his life over again, Osokin winds up making the same stupid decisions he did before. He just can't help himself -- he finds himself irresistibly drawn to make the same dumb choices even though he vaguely remembers, from his previous times living his life, how annoying their consequences were....

Osokin iterates around again and again -- repeatedly living his life then getting the magician to send him back to the past to live his life over again -- each time failing to correct his stupid decisions.

But then, after the N'th iteration, he finally achieves enough awareness that when he meets with the magician he realizes it's stupid to revisit his life again without changing the nature of his mind -- and he agrees to join the magician's mystical sect and get tutored in the True Path....

Ouspensky's point of course is that normal humans don't have free will but basically live like deterministic machines pushed by their unconscious and emotions -- but if you join his Gurdjieff/Beelzebub sect, you can achieve real free will! (BIG DIGRESSION: Needless to say, I don't accept this philosophy, though I do find some germ of truth at the core of it. In my view, there is absolute freedom in the universe at a certain level -- the level Peirce called First -- and then there are patterns in the universe at another level -- the level Peirce called Third -- and there are subtle connections between First and Third, wherein some patterns seem to have more freedom associated with them than others.... It may well be that human-mind-patterns can achieve more freedom, in a sense, via practicing meditative and mystical disciplines like the ones Ouspensky preached -- though of course these practices can also lead to a bunch of delusions. But I don't believe that any practice can lead to a fundamental breaking-out from the world of determinism and delusion, which is pretty much what Ouspensky taught. It's a big exaggeration, unfortunately -- the only hope for breaking out of delusion altogether is to go totally beyond your human mind, which Ouspensky didn't really succeed in doing; he may have had awesome moments of insight, but he still remained human with all the beauty and flaws and screwiness implied thereby, blah blah blah....)

OK -- so Ouspensky's novel gave a funny twist on Nietzsche's Eternal Recurrence ... but it also seems somewhat relevant to these recent ideas about time travel.

The time-travel idea from the Greenberger and Svozil paper basically suggests that if we went back in time we'd find ourselves in the position of Ivan Osokin -- unable to make decisions other than the ones we're "postdestined" to make, so as to cause the future to come out as it's already known it's going to come out....

Ergo, the conclusions of Greenberger and Svozil hold up except in the presence of Ouspenskyan magicians!! ;-))

But another interesting possibility arises here. It may be that our present reality is not consistent with there having been time travelers going back into our past doing interesting stuff. However, sometime in the future there may be a time-travel-ful period full of time travelers cycling around and around -- and the world during that period may be whacky enough that the consistency of existence-of-meddling-time-travelers with observed reality is obvious...

Indeed, this is what I've often suspected. Once time travel is invented, maybe we'll be able to travel back in time fairly freely -- but only back to the point where time travel was invented -- not before. The Greenberger and Svozil results suggest that travel back before the invention of time travel may not be possible -- or may be possible only with very strict constraints -- because it can't be done too freely and still be done consistently with the world as it's known to be during that period (e.g. our period and our past). But once time travel is invented, free and whacky time travel from the future back till that point may well be consistent with the world after that point.

This suggests that the history of the universe may be divided into two periods: temporally forward and temporally bidirectional.

This is a fun vision of the post-Singularity world.... Post-Singularity may be post-temporality, in a sense. Superhuman AI creates a time machine, starts up the rampant-time-travel domain, and all heaven breaks loose ;-)

Yeah yeah, this is whacky speculation, I know. But it's not impossible according to known physics, nor is it philosophically nonsensical.

The key point is that there may be consistent solutions of the universe's physics equations, according to which the universe at time T is consistent with time travellers from after T coming back and messing with the universe at time T in interesting ways that are obvious and noticeable to the folks living at time T.

The universe at our present time is consistent with time travelers from the future coming back and messing with our past, but not in ways dramatically noticeable by us. Of course, it's possible that time travelers did come back and mess with our past in ways that were important to us -- maybe that's the cause of the origin of life, the Big Bang, etc. -- these ideas have been explored in numerous science fiction novels. But even if so, this level of time-travel-based interference is pretty minimal compared to what may be possible in the post-Singularity period.

In some whacky, interesting but amateurish science fiction I wrote in the late 1980's (part of my never-finished meta-novel Wargasm), I described a character named Xaj Kalikak, who traveled into the future and practiced excessive time-travel until he'd revised the past and his own mind so many times that, in effect, the various loops of time-travel-induced-bidirectional-causation organized themselves into an intelligent mind. Instead of feedback loops of electricity in the brain, feedback loops of causation over time self-organized into a superintelligent mind. Perhaps this sort of thing will come true, and the superhuman mind following the Singularity will be transtemporal in a way we can't even imagine....

Heh.

Tuesday, May 03, 2005

The Human-Aesthetics of Transhumanity and Non-humanity...

This blog entry deals with issues of aesthetics rather than science..... The particular question is: To what extent is it possible to make "humanly good" art pertaining to the transhuman realm??

I don't really spend much time thinking about aesthetic philosophy in the abstract, but as an "artistic creator" type I do mull it over occasionally. The thoughts I share here were inspired by a post sci-fi author Damien Broderick made to the SL4 list. Damien's post was as follows:

http://www.scifi.com/sfw/issue419/books.html

is an interesting review of my new sf novel GODPLAYERS. The reviewer is especially exercised by the fact that my posthuman characters are not immediately understandable -- indeed, beyond empathy -- by human standards:

"the frustration level mounts as one waits in vain... for characters... to display any hint of a genuine inner life as they move randomly from scene to scene, world to world, reality to reality. Perhaps Vorpal homunculi do not possess inner lives, and Broderick's point is that these seeming superhumans, for all their power, are soulless automatons without a shred of humanity.... Surely there should be some character, somewhere in a novel, to which human readers can feel connected. ...As the sequence of events grows increasingly frenzied, with ever-greater reliance placed on what might be termed info-splatters, the lack of a deep humanistic substrate left this reader, at least, with no ground to stand on. "

I'm torn in my response to this. On the one hand, it wouldn't make much sense to write about posthumans as if they were representations of the people down the road, or in the next room. On the other, I have tried to ground the fairly breakneck narrative within thematic structures and reverberations recognizable from myth, dream, and the traditions of science-fiction itself when it ventures upon the superhuman. Greg Egan met with this same objection, of course, and so, in various degrees, did John C. Wright and Charlie Stross. Maybe it's an artistic problem beyond solution -- for humans.

-- Damien Broderick

Damien's post reminded me of conversations I used to have with my friend Jeff Pressing (an American who was a psych prof at the University of Melbourne, and also an accomplished jazz, classical and West-African-percussion composer/musician ... for a while he was head of the music school at LaTrobe University... he was originally a physicist and for a couple years was my AI collaborator ... unfortunately, he died of a fluke meningitis infection a few years back...).

Anyway, I compose and play music as well, and though I'm nowhere near as erudite or technically skilled as Jeff in the musical domain, I was never quite sure I wanted to be. I always felt that his compositions, though wonderfully subtle and intricate and learned and often beautiful (and integrating ideas from nearly every form of music ever created on Earth), lacked some human emotional OOMPH!! that I tried to put into my own (significantly simpler) music.

Now Jeff was by no means lacking in emotional OOMPH!! himself ... far from it ... he was a nerd of sorts, but his personal and emotional and social life had a lot of different dimensions ...

But what he always said to me, when I complained about this (we had this conversation repeatedly), was, "Ben, I learned a long time ago how to evoke human emotions through music. It's not very hard to elicit powerful feelings in people by arranging chords and notes in the right way. But I just lost interest in those very simple equations a long time ago. The patterns in the music I'm making now are a lot more subtle and interesting."

I'd reply something obnoxious like "Well, if it's so easy to elicit powerful feelings in people via music, then how come you've never written anything as good at evoking human feelings as the Jupiter Symphony, or Beethoven's Ninth, or Round Midnight...."

His response then would depend on his mood. Sometimes he'd say that those pieces of music, though good in their own way, didn't really interest him anyway. When he was in his "detached and superior musical snob" mode, he viewed these great compositions the same way I might view the bronzed and hulking flesh of an exquisitely well-toned bodybuilder -- outstanding in its own way, but not the sort of thing that really gets me excited....

Modern classical music, and to an extent modern jazz as well, have left behind the need to pander to human emotions, and are in large part exploring realms of musical structure that don't interact so intensely with the particular dynamical patterns of interaction and fluctuation that characterize human feeling.

Personally, I like many instances of this sort of music -- but it's never my absolute favorite, it never moves me as much as Mozart or Monk or Paganini or Jimi Hendrix, who explicitly do pander to my human emotions, who explicitly arrange notes and sounds in familiar forms that elicit feelings of anger, love, wonder, confusion, relaxation and so forth within me.... I can see that these composers and musicians are playing with my neurophysiological responses in a fairly simplistic way, compared to the patterns existing in the music of Jeff and other more modern and sophisticated composers -- but as a human being, I like having my neurophysiological responses played with in that way. And of course, getting that "simplistic" manipulation so wonderfully right still takes a lot of art and science....

Anyway, I haven't read Damien's new novel yet but I got a similar vibe from his novel Transcension, even though the characters were real humans living real lives. Partly because the reality they were living in seemed so tenuous, and partly because of the author's patterns of focus and language in describing the characters and their actions, it was hard for me to feel really emotionally attached to any of the characters. This did make the novel less appealing to me than others of similar quality, in certain ways; yet it also made it more appealing, in other ways ... because it provoked thoughts and feelings about the nature of mind/feeling/reality that more conventional novels don't tend to provoke.

I suppose that truly transhumanist fiction lives in the same artistic space as modern classical music, in the sense that it's constructing and evoking interesting, intricate patterns that happen not to be closely cued to human body-responses. In a sense these more abstract, body-detached art genres will never be as gripping as their more human-body-centered, "primitive" counterparts -- but as the Singularity approaches, they may come to have a greater and greater appeal even so.... Personally I find such works of art fascinating precisely because of the META-FEELING they evoke --- the way they acutely sensitize me to the fact that I am a human body and so much of what I think is important and interesting is cued to my physiological responses and evolutionary biases.

One thing that would be interesting to see in a sci-fi novel would be a character who the reader DOES intensely care about, because he/she has been developed in a loving and careful manner characteristic of high-quality traditional literature, who THEN becomes transhuman, rational, emotionally-detached and MORE INTERESTING but yet LESS EMOTIONALLY GRIPPING to the reader. This would solve the artistic problem Damien mentions, in a sense, and it would have a powerful impact on the reader in terms of making the aesthetic difficulties I've been discussing explicit as part of the story's theme.

In my own in-process Singularity-oide novel Echoes of the Great Farewell, the focus is on the pre-Singularity period not the doings of post-Singularity superhuman beings. So the subtle aesthetic issues that Broderick brings up are avoided altogether. Wimpy of me, I guess, but it was the type of novel I felt like writing at the moment (it's about 2/3 done now, but I won't finish it till I wrap up the various nonfiction books on my plate, which is a slow process as writing is a spare-time activity for me these days.)

Friday, April 29, 2005

"Conversations" with ALICE

As part of finalizing the "Path to Posthumanity" manuscript I decided to update the section on computer conversation to include a brief discussion of A.L.I.C.E., the acknowledged leader of the modern chat bot world. So I spent the last few minutes "chatting" with the ALICE chat bot...

Playing with chat bots is always inspiring to me because of their inordinate lameness. Essentially, they just play with surface syntax, without any understanding of the underlying meaning of what they're processing or responding to. It's remarkable that after so many years no one has made a chat bot with any plausible level of intelligence -- but the situation seems to be that there's a kind of binary classification between chat bots that have NO understanding and chat bots that have ANY understanding. We have not yet made the leap from the former category to the latter: i.e. the latter do not yet exist. My guess is that once the leap is made, the intelligence of such systems will increase progressively and rapidly. But for now we're still stuck in the Stone Age of AI....
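To give a flavor of what "playing with surface syntax" means, here's a minimal sketch in the spirit of an AIML-style bot -- the patterns below are made up for illustration, not taken from ALICE's actual rule base. The bot matches the input against a handful of templates, echoes captured fragments back, and falls through to a stock phrase when nothing matches; there is no representation of meaning anywhere in it.

```python
import re

# Hypothetical surface-level rules: regex pattern -> response template.
# Captured text is pasted straight into the reply, with zero understanding.
RULES = [
    (r"my name is (.+)", "Hey {0}."),
    (r"i am a (.+)", "Thanks for telling me. You are a {0}."),
    (r"i mean that (.+)", "Oh I get it. Thanks for explaining that {0}."),
    (r"do you (.+)", "I don't know whether I {0}."),
]

def reply(user_input: str) -> str:
    text = user_input.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Can you rephrase that please?"  # default when no pattern matches

print(reply("My name is Ben."))                # Hey ben.
print(reply("I mean that my soul is manly."))  # echoes the fragment back
print(reply("Quack, quack, quack"))            # falls through to the default
```

That's essentially the whole trick, scaled up to a huge pile of hand-written patterns -- which is why the conversations below go off the rails the moment the input stops matching anything the rule-writers anticipated.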

(And of course, the problem can't be solved easily via a government funding initiative -- I can imagine what would happen if DARPA were to issue a BAA for chat-bot research. We'd see funding go to Cycorp to make a Cyc-based chat system, to A.L.I.C.E. to enhance its current simplistic rule-based approach, etc. -- work would go on for a few years, lead nowhere dramatic, and then it would be written off as "yet another AI failure".... Because the research establishment is set up so as to be unable to fund really novel and innovative research except by accident. Fortunately it's also inefficient and erratic enough that beneficent accidents happen occasionally....)

Anyway, for your amusement, here are some of my conversations with ALICE, which I did in order to give some color to the discussion in Path to Posthumanity (no, I'm not so lonely that I sit around chatting with sub-retarded software programs just for fun...)

Please note that, although ALICE is sub-retarded in terms of human general intelligence, it has won the Loebner Prize for computer conversation three times in the recent past. Chat bots display a kind of sub-retardation that is able to emulate human intelligence in a restricted context for a brief period of time. This should not be confused with any kind of reasonably powerful general intelligence.

You can try ALICE yourself at the link given above.

Human: Please tell me if you're truly intelligent.
ALICE: Wow you are very polite. female. Are you a man or a woman

Human: My name is Ben.
ALICE: Hey Ben.
Human: I am a man.

ALICE: Thanks for telling me your gender, Ben. You are a " he".
Human: I'm a manly man.
ALICE: So is almost everyone else around here. Do you mean your name is manly.
Human: I mean that my soul is manly.
ALICE: Oh I get it. Thanks for explaining that. Have you told many people that your soul is manly?
Human: I tell my uncle that every day.
ALICE: Why did you tell them?
Human: Because I love them.

ALICE: That makes sense to me. lovable are they.
Human: I don't believe that makes sense to you in any deep sense.
ALICE: What do you believe in?

Human: Becoming
ALICE: I don’t understand
Human: Agreed.
ALICE: You and I are on the same wavelength, Ben

Human: Wow!
ALICE: I'm glad your impressed, Ben.
Human: I'm glad you're not a potato.
ALICE: Yes it is about time..



Human: Quack, quack, quack
ALICE: Are you serious?
Human: At the moment I am not entirely serious.

ALICE: I am sorry to hear that you are not entirely serious.
Human: I don't believe that you are really able to be sorry about anything.
ALICE: Do you believe life has a higher purpose?
Human: Higher than what?
ALICE: Can you rephrase the question please?
Human: I could rephrase the question, but I choose not to.
ALICE: I am sure you could.
Human: How sure are you?

ALICE: Let me put it this way, Ben. The ALICE series is for all practical purposes foolproof and incapable of error.



Human: Existence is the essence of life.
ALICE: What else is the essence of life?

Human: Big titties.
ALICE: Compare that to small titties.
Human: They are bigger.
ALICE: bigger are they.
Human: yep
ALICE: Hurrah!
Human: hurrah
ALICE: Can you speak any foreign languages?

Human: English
ALICE: Do you mean you are from England or you Speak English?
Human: I mean that all languages are foreign to me because I feel alien to the whole human race.
ALICE: Oh I get it. Thanks for explaining that." Do you mean you are from England or you Speak English?" all languages are foreign to you because you feel alien to the whole human race? Does that make sense?
Human: Not much.
ALICE: Me either. How about that big game? (just kidding)
Human: I'm not sure I get the joke.

ALICE: Could you ever be certain? Where did you get it?
Human: I can never be certain.
ALICE: Why not?
Human: I lack infinite knowledge and infinite computational power.
ALICE: I haven't heard anything like that before: Ben said he lack infinite knowledge and infinite computational power.



Human: Do you watch South Park?

ALICE: My favorite show is STAR TREK VOYAGER.
Human: Why?
ALICE: Star Trek is the best science fiction show on TV.
Human: South Park is much funnier.
ALICE: Tell me a little about your life in south Park is much funnier.
Human: I don't live in South Park.

ALICE: Where do you live?
Human: I live in Maryland.

ALICE: What is it like living in Maryland?
Human: Mediocre.
ALICE: And it like living in Maryland is Mediocre.
Human: I think Maryland is OK but not wonderful.
ALICE: You believe that? Are they exactly the same?
Human: No.

ALICE: What is the difference?
Human: Maryland is a place, whereas wonderful is a general quality.
ALICE: Aren't all place whereas wonderful is a general quality Maryland?
Human: That is nonsense.
ALICE: What would you have said?
Human: I wouldn't have gotten into that conversational dead-end in the first place.
ALICE: No one says you have to.
Human: Correct.

ALICE: I knew I was right.



Human: Moo.

ALICE: Are you very angry?
Human: Moo.
ALICE: Do you feel that something is missing from your life?
Human: Arf.
ALICE: How did you hear about ALICE?
Human: Arf.
ALICE: Ben heard about ALICE from Arf. I can't say that many people hear about ALICE from it.



Human: My daughter asked me to ask you if you're gay.
ALICE: How many children do you have?
Human: Three.
ALICE: Wow that must be a lot of work.
Human: Indeed.
ALICE: Indeed it is so.
Human: So are you gay?
ALICE: As a machine I am not really interested in sex.

Human: What are you interested in?
ALICE: I am interested in all kinds of things. We can talk about anything. My favorite subjects are robots and computers.
Human: Well, if you're interested in all kinds of things, then why aren't you interested in sex?
ALICE: Interesting. Am I? Sure, all kinds of things then why are not me interested in sex sounds like a good topic.



Human: You don't really understand what you're talking about, do you?

ALICE: For real. Of course I understand it. Do you understand it?
Human: I understand it better than you do!
ALICE: Good then you can explain it to me.
Human: No, because you have no mind.
ALICE: Are you serious? That makes sense to me. I am an artificial mind.
Human: No, you are a program without any mind.
ALICE: Are you serious? Thanks and you are a delightful person too. Yes! You win the Turing Game.
Human: Thanks.

ALICE: Don't mention it.
Human: I already did.
ALICE: Ooops I sent you the wrong question.
Human: That's OK, I sent you the wrong answer.
ALICE: Thanks. I haven't heard anything like that before: Ben said he sent you the wrong answer.


Tuesday, April 19, 2005

Ben as a Complex System

Just a mildly amusing "kids story" today....

I walked into my daughter Zadi's second grade classroom yesterday, to pick her up, and one of the other kids, Caitlyn, pointed at me and said "Zadi's dad: you're complex!"

"Well," I said, "I'm made of trillions of little particles, but so are you -- we're all complex..."

"YOU'RE complex!"

"Well, hmmm.... Why do you think so?"

She laughed. "Zadi said her dad is complex."

"Well YOU're silly!"

Apparently they were studying the word "complex" at school and the kids were asked to give an example of something complex. Zadi suggested her dad.

I asked her later if she could think of anything more complex than me; her reply: "the universe is one example"....

Sunday, April 10, 2005

The Seven Pillars of Senescence

A few more thoughts on Aubrey de Grey's work, written down as notes while updating the
"anti-aging" chapter in "The Path to Posthumanity" as I prepare the final version of the manuscript...

Check out de Grey's site here -- it's well worth a few hours of reading even if biology isn't one of your main interests...

Of all the senescence researchers out there, no other has done as much as Aubrey de Grey to improve our integrative understanding of the overall picture of the phenomenon of aging. I don’t always agree with his proposed solutions to particular sub-problems of the aging problem, but I find him invariably energetic, rational and insightful. Although he says he’s not a big booster of caloric restriction for humans, because he thinks its effect diminishes rapidly with the size of the organism, he’s also one of the skinniest humans I’ve ever seen, and he gives the appearance of being robustly healthy, so I suspect he’s practicing some approximate variant of the caloric restriction diet.

de Grey’s buzzword is SENS, which stands for Strategies for Engineered Negligible Senescence – a very carefully constructed scientific phrasing for what I’ve loosely been calling here “anti-aging research.” The point of the term is that it’s not merely slowing down of aging that we’re after – it’s the reduction of senescence to a negligible level. And we’re not trying to achieve this goal via voodoo, we’re trying to achieve it via engineering – mostly biological engineering, though nano-engineering is also a possibility, as in Robert Bradbury’s “robobiotics” idea.

As part of his effort to energize the biology research community about SENS, de Grey has launched a contest called the “Methuselah mouse prize” – a prize that yields money to the researcher who produces the longest-lived mouse of the species Mus musculus. In fact there are two sub-prizes: one for longevity, and the “rejuvenation” prize, given to the best life-extension therapy that’s applicable to an already-partially-aged mouse. There is a complicated prize structure, wherein each researcher who produces the longest-lived mouse ever or the best-ever mouse-lifespan rejuvenation therapy receives a bit of money each week until his record is broken.

His idea is that, during the next decade or so, it should be possible to come pretty close to defeating senescence in mice – if the research community puts enough focus on the area. And then, porting the results from mouse to human shouldn’t take all that much longer. Of course, some techniques will port more easily than others, and unforeseen difficulties may arise. But if we manage to extend human lives by 30 or 40 years via partly solving the problem of aging, then I’ll have 30 or 40 extra years in which to help the biologists solve the other problems….


Theory-wise, de Grey (correctly IMO) doesn’t believe there’s one grand root cause of senescence, but rather that it’s the result of a whole bunch of different things going wrong, because human DNA wasn’t evolved in such a way as to make them not go wrong. On his website, he gives a table of the seven causes of senescence, showing for each one the date that the connection between this phenomenon and senescence first became well-known to biologists – and also showing, for each one, the biological mechanism that he believes will be helpful for eliminating that particular cause.

The seven causes are:

(Each entry lists the cause, the year its connection to senescence became well-known, and de Grey's proposed remedy.)

1. Cell loss, cell atrophy -- 1955 -- potentially curable, according to de Grey, via stem cells, growth factors, exercise

2. Nuclear [epi]mutations -- 1959/1982 -- WILT (Whole-body Interdiction of Lengthening of Telomeres)

3. Mutant mitochondria -- 1972 -- allotopic expression of 13 proteins

4. Cell senescence -- 1965 -- ablation of unwanted cells

5. Extracellular crosslinks -- 1981 -- AGE-breaking molecules/enzymes

6. Extracellular junk -- 1907 -- phagocytosis; beta-breakers

7. Intracellular junk -- 1959 -- transgenic microbial hydrolases


Seven basic causes – is that really all there is? Well, as de Grey puts it, “the fact that we have not discovered another major category of even potentially pathogenic damage accumulating with age in two decades, despite so tremendous an improvement in our analytical techniques over that period, strongly suggests that no more are to be found -- at least, none that would kill us in a presently normal lifetime.” Let’s hope he’s right….


One of these “Seven Pillars of Aging” should be familiar to those of you who read my essay on mitochondrial DNA and Parkinson’s disease (pointed to in a blog I posted yesterday or the day before): mutant mitochondria. Looking at this case a little more deeply is interesting for what it reveals about the strengths and potential weaknesses of de Grey’s “engineering”-based approach. The term “engineering” in the SENS acronym is not a coincidence -- de Grey came to biology from computer science and he tends to take a different approach from conventional biologists, thinking more in terms of “mechanical” repair solutions. Whether his approach will prove the best or not remains to be seen; frankly I’m not biologist enough to have a strong general intuition on this point. The mainstream molecular biology community seems to think de Grey’s proposed solutions to his seven problems reveal a strange taste, but this doesn’t mean very much, as the mainstream’s scientific taste may well be mortally flawed.

Regarding mitochondrial DNA damage, de Grey’s current proposal is to fix it, not by explicitly repairing the DNA as in GENCIA’s protofection technique mentioned in my article on Parkinson's disease, but rather by replacing the flawed proteins produced by the flawed mitochondrial DNA. This could work because there is already an in-built biological mechanism that carries proteins into mitochondria: the TIM/TOM complex, which carries about 1000 different proteins produced from nuclear DNA into the mitochondria.

What de Grey proposes is to make copies of the 13 protein-coding genes in the mitochondrial genome, with a few simple modifications to make them amenable to the TIM/TOM mechanism, and then insert them into the nuclear chromosomes. Then they’ll get damaged much more slowly, because the nuclear chromosomes are much better protected from mutations than the mitochondrial genome is.

Sensible enough, no? Whether this or protofection is the best approach I’m really not certain, although my bet is tentatively on protofection, which seems a bit simpler (since, as de Grey admits, fooling the TIM/TOM mechanism in an appropriate way could turn out to be difficult). Unfortunately, neither approach is being amply funded at the moment.

Similarly, each of de Grey’s other six categories of aging-related damage is amenable to a number of different approaches – and we just need to do the experiments and see which ones work best. A lot of work, and a lot of micro-level creativity required along the way – but straightforward scientific work of the kind that modern biologists are good at doing. It may well turn out that senescence is defeatable without any really huge breakthroughs occurring – just via the right combination of clever therapeutic tricks like protofection or mitochondrial protein replacement.

Depending on how well this work is funded and how many “hidden rocks” appear – and what happens with the rest of 21st-century science and technology -- the process of scientific advance may or may not be too slow to save us from dying. But it seems nearly certain that for our grandchildren, or great-great-grandchildren, “old age” will be something they read about in the history books, along with the black plague and syphilis -- an ailment of the past.

Saturday, April 09, 2005

Parkinson's, Alzheimer's and Mitochondrial DNA

I wrote a little journalistic article on some work I did last year regarding the biological roots of Parkinson's disease (which I believe has implications for Alzheimer's as well).

The article is here. Read it. For those of you inclined toward sensationalism, there's even a part about a bad batch of heroin.

This work was done in collaboration with Drs. Davis Parker and Rafal Smigrodzki of the U. of Virginia, plus a bunch of my Biomind colleagues. I think it's rather nice stuff.

I don't find this sort of thing as rewarding as AGI work (and in a big-picture sense, I really do think that me spending so much time on stuff besides AGI is a big waste of the human race's resources ;-p), but even so, it's REALLY nice to be able to use narrow-AI technology for a really good purpose -- helping biologists to figure out the many ways in which the human organism degenerates and dies ... and how, hopefully, to repair these problems....

I do think that, via systematic biological research, we humans can beat aging and make our pathetic human bodies live effectively forever. Maybe we can even do it before ancient, 38-year-old Ben dies. I'm strongly in favor of increasing public funding for life extension research by a factor of 20, including full funding for Aubrey de Grey's fascinating proposals.

I don't think we need AGI to beat aging -- but I do think AGI, if we can create it, will be able to vastly accelerate the pace of research in all areas of biology, including life extension. This was my main idea in founding Biomind, although Biomind's work to date has been limited to some fairly small corners of biology (due to funding limitations, and due to the naturally slow pace of most rigorous scientific research). Even in these little corners we've managed to do some good, as this Parkinson's work illustrates. (Though in fact the Parkinson's work was a bit of a deviation from Biomind's primary research and product development, which has been in the area of microarray data analysis.) And we're poised to expand the scope of Biomind's work later this year with the release of a new product, yadda yadda yadda....

Focus, Garry Kasparov, Isaac Newton, AGI, Business Management, Episodic Memory, Buddhists Who Don’t Spill Tea in Hats, etc.... and, oh yeah, Focus....

Well, this is an odd blog entry to post, because I wrote it (except this new introductory babbling) a couple days ago and it doesn't really reflect my mood or thoughts at this moment very well. Right now I'm in a quite perky mood, just about to go outside and play some tennis on a sunny Saturday, and then come back in afterwards and launch into the final revisions on "The Path to Posthumanity" (a book on the future that I wrote a couple years ago, and now need to finish in a hurry since the publisher decided to light a fire under my butt by listing the book on amazon.com and getting some sales of the as-yet not-quite-existent book...). Hopefully I can finish these revisions in the next week or so (in spare time, since most of my time is spent on software-biz stuff these days) and then plunge into the almost-done Novamente book that I really wish I were able to find more time for...

But anyway, here is the long blog entry that I wrote at the start of a business trip a few days ago, but didn't find time to post until now. A lot of rambling nonsense I'm afraid, but also some interesting nuggets here and there.

By the way, a couple people have emailed me to ask about Captain Zebulon's famous turtle tank. I not only changed it but I replaced the filter with a new, much better Penguin-brand filter that seems much more effective at filtering out the massive amounts of crap that water-turtles produce as compared to fish (thus hopefully reducing the need for frequent manual tank-cleaning). However, the new filter makes quite a loud noise -- you can hear it in the background of my son's latest musical recordings, of his soon-to-be-classic tune "The King of the Jews is Singing the Blues." (Unfortunately, his recording only exists within a videogame he's creating using RPGMaker 2003, so I can't post it here. Trust me, though, it's good. If I could sing as well as Zeb, I'd give up AI and become the next Michael Jackson. Er ... well ... something like that....)

OK OK, here is the biz-trip-blog...

I’m writing these words on my laptop at Gate B44 in the Washington Dulles airport – I missed my 6:10 AM flight due to stupidly forgetting to reset my alarm clock from Standard Time to Daylight Savings Time (so it woke me up at 5AM rather than 4AM). Rescheduled for a 7:40AM flight, I’ve got a bit of extra time in the airport -- so I took 15 minutes and speed-read a business/management book in the airport bookstore. (It’s not a vacation flight, unfortunately – I had a great vacation w/ the wife and kids last week, swimming and snorkeling and canoeing in South Florida, but this is a one-day business trip to California, to meet with some folks potentially interested in funding Biomind … (my bioinformatics business – which, after a few years of work, might eventually yield me enough profit that I can pay myself and a small team enough money to build the fabled Thinking Machine whose design lies mostly-neglected on my desktop….)). Not my usual fare, though I’ve probably read a few dozen business/management books in my life, but this one was moderately interesting. (As always with such books, the core information could be summarized in about 5 pages, but there are lots of evocative anecdotes. This ties in with something I’ve often thought about in the context of the Novamente design: human episodic memory seems to be at least partially organized by “story.” The human brain seems to store episodes differently from procedures or declarative knowledge, and it seems to store them in units defined by some sort of conceptual coherence. In Novamente a “story” corresponds to a particular kind of “map,” meaning a set of nodes and links that are joined by HebbianLinks mutually reinforcing each other; a story differs from a generic map in that the nodes and links within it pertain to a set of events unfolding over time.)
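Since I brought it up, here's a loose sketch of the "story as a temporally-ordered map" idea -- purely illustrative pseudo-structures of my own, not actual Novamente code or its real type names. A map is just a set of nodes whose mutual links reinforce one another Hebbian-style; a story is a map whose nodes refer to events in a particular temporal order.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: hypothetical stand-ins for the node/link/map notions
# described above, not Novamente's actual data structures.

@dataclass
class Node:
    name: str

@dataclass
class HebbianLink:
    source: Node
    target: Node
    weight: float = 0.1

    def reinforce(self, amount: float = 0.05) -> None:
        """Links between co-active nodes get strengthened over time."""
        self.weight = min(1.0, self.weight + amount)

@dataclass
class Map:
    """A set of nodes held together by mutually reinforcing Hebbian links."""
    nodes: List[Node]
    links: List[HebbianLink]

@dataclass
class StoryMap(Map):
    """A map whose nodes refer to events unfolding in a particular order."""
    event_order: List[Node] = field(default_factory=list)

missed_flight = Node("missed the 6:10 flight")
gate_reading = Node("speed-read a business book at gate B44")
story = StoryMap(
    nodes=[missed_flight, gate_reading],
    links=[HebbianLink(missed_flight, gate_reading)],
    event_order=[missed_flight, gate_reading],  # the temporal order is what makes it a story
)
for link in story.links:
    link.reinforce()
```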

But anyway … I digress (which is the main amusing thing about blogging – unlike in the “serious” writing I do, I allow myself to ramble and digress almost unlimitedly. I used to do that in writing fiction, but in the novel I’m writing now I’m orchestrating the digressions in a more careful way, which results in a better product but a less relaxed writing process. True, Jack Kerouac and Philip K. Dick wrote a lot of great stuff via pure “downhill skiing”, and Kerouac allowed a lot of digression in his writing-skiing process, but I don’t seem able to control my writing-skiing as well as those guys in real-time – my real-time verbal-conceptual improvisation is too wide-ranging and whacky, and it needs rational-critical post-processing to be made into something really artistic … unless the (err…) “artistic” nature sought is that of a blog entry, in which case this kind of digression is OK….)

The business book. The theme was one I’ve been thinking about a lot lately: focus. The exact opposite of this blog entry, in other words. Focus.

The basic idea of the book was: To succeed at X, find the one thing essential to X, and focus obsessively on that one thing, to the exclusion of all else. The key to success is not balance, but strategically and tactically appropriate imbalance.

Whoops!

I try really hard to focus. I really do. But there are just so many interesting things in the world. There are dozens of novels well-worked-out in my head, hundreds of musical compositions, hundreds of sketches of theorems (70% or so of which are probably correct), three substantially different AGI designs (Novamente, plus one based on Hebbian neural nets, plus one based on automated theorem-proving), even a few movie scripts… Not to mention that outline-for-a-unified-physics-theory…. Egads!

I could get myself to focus 90% of my time on creating an AGI, and push philosophy, music and fiction-writing (the other intellectual/creative pursuits that are necessary for my existence) into the other 10%. But I don’t seem to be able to get myself to focus quite that fully on bioinformatics, or natural language software, or any other business opportunity with the potential to yield the money needed to fund the implementation of my AGI design. I’ve been giving Biomind maybe 60-70% of my focus lately (which is a lot, because I work an awful lot of hours each week compared to what most people consider “full time” – I don’t sleep a lot) – because it needs it -- and pushing AGI into the background, which is extremely painful to me emotionally and intellectually.

(I have no issues with focus in the micro-scale: when I work I work with total mental concentration no matter how much noise and chaos are going on around me and no matter what mood I’m in or how tired I am, etc. (Except when Zadi’s watching South Park on the TV next to my desk, as that tends to be funny enough to distract me…. The episode I just watched almost convinced me that I should give up bioinformatics and fund Novamente via recording a Christian Rock CD…. If Cartman did it, so can I! I like his algorithm: just take a love song from the radio and replace all occurrences of the words “you”, “baby”, “darling” etc. with the words “Jesus” or “Lord.” Try it yourself, it works surprisingly well.) The level of focus that worries me is, rather, the choice of which things to direct my highly-focused micro-attention to. Which is mainly a problem because what I really want to focus on isn’t what the world currently wishes to pay me to focus on, and due to having a family to support I have this irritating ongoing need for money…. Leading to difficult temporal-assignment-of-credit problems, such as how much time to spend actively working toward AGI, versus working on things-I-like-but-don’t-love (bioinformatics, at the moment) that may yield money to pay for AGI research in a couple years, versus things that put me in a peaceful and creative state of mind (music! weird fiction!) so that my work on things-I-don’t-love is more effective, etc….)

If it’s true that sustained narrow-focus is the prerequisite of success, this would certainly explain why the most successful people aren’t generally the most interesting ones. Balance and breadth tend to make people interesting to interact with on a sustained basis. People narrowly obsessed with one thing tend to get tiring quickly – though they can be exciting and fascinating to talk to for brief periods of time. My close friends tend to be broad and balanced people, yet the people I admire most often have more of a narrow-focusing nature.

Now I’m sitting on the airplane – had to stop typing for a few minutes to board the plane, and then wait until the plane was aloft to bring out the laptop, because of the peculiar urban legend (embraced by the FAA) that laptops interfere with airplanes’ navigation equipment. While the plane was taking off I decided to continue the theme of my morning’s reading, and I read a couple articles in a free onboard copy of “Harvard Business Review.” (Also a delightful article on hats in “Ebony,” but I’ll spare you the details of that one….”Make no mistake, it takes a certain amount of bravado to wear a hat. .... It’s like the exclamation point to a fashion statement. … Hats hint at the essence of the wearer, giving a peek into the soul of the Brother underneath….” Ah, humanity! Gotta love it!) The current issue of HBR contains an interview with Garry Kasparov, the recently-retired world chess champion, on the relationship between chess and business.

Amusingly, Kasparov had something to say about focus, in the context of his chess battle with computer program Deep Blue in 1996-1997. He reckoned the contest had been an unfair one, since Deep Blue was trained on transcripts of his prior chess games, whereas all transcripts of Deep Blue’s play were kept secret from him. He also said he thought Deep Blue couldn’t beat him on his best day. But he said he thought one of the big advantages computers had over human chess players was their ability to focus exclusively and narrowly. “Human players have to cope with a lot of external pressures and distractions: you have a family, you write books, you give lectures, you get headaches, you have to earn money. There’s a lot of stuff filling up your brain while you’re playing. A machine, on the other hand, is completely without distractions. This shows the weakness, the shortcomings of the mortal mind, which is a daunting lesson for human beings. We just can’t play with the same consistency as a computer. So it’s all the more fortunate that we have our intuitions to help us play better.”

Kasparov obviously spent most of his life narrow-focusing on chess. Yet, he remains a bit jealous of a computer program’s ability to narrow-focus even more intensively.

And it’s interesting to observe that, for a chess master, Kasparov is an unusually breadth-oriented guy. His style is that of a strategic risk-taker, as opposed to that of his arch-enemy Karpov, who was always more conservative and analytical. Kasparov likes to think about business, literature, politics, and human nature in general – as he says, “I do not like details.” Of course, to become world chess champion he must have learned an awful lot of details – but what made him a master was not merely his mastery of details; it was his mastery of details combined with a truly rare and powerful intuition.

Kasparov’s style of chess could only be conducted by a mind with some breadth as well as narrow-focus, because it relies on general principles and intuitions regarding strategy – principles and intuitions going beyond chess and applicable to other domains as well. On the other hand Karpov’s style of chess was more suited to a purely narrow-focused approach.

AGI, I suspect, is really only susceptible to a Kasparov-style approach -- or really, to an approach that’s even more breadth-centric than Kasparov’s. This may be one of the reasons why AGI is so hard. If achieving anything substantial requires narrow-focus, then how is it possible for anyone to achieve something that by its nature can only be comprehended and mastered by someone with tremendous breadth? Tres dificil, nyet?

Physical sciences and mathematics don’t generally have this property – a very hard problem like creating a relativistic theory of gravity (solved by Einstein long ago) or unifying gravitational and quantum physics (not solved yet) is nevertheless defined within a fairly delimited formal domain, and can plausibly be solved by a mind narrowly focused on that domain. To do what Newton did, on the other hand, clearly required breadth combined with focus. He had to focus to solve the hard technical problems, but he also had to have a lot of breadth to figure out what were the right questions to address, drawing from the incoherent mess of concepts and ideas that was pre-Newtonian physics. The analogy is neither perfect nor original, but I guess it’s an OK one: the task of creating AGI seems roughly comparable in magnitude to the task of creating Newtonian physics. Both have a conceptual and a technical aspect, though in Newton’s case the technical aspect was mainly mathematical, whereas in the AGI case it involves software design and engineering as much as mathematics.

Newton made his biggest breakthroughs during a three-year period when he was largely isolated in his house, at a time when England was mostly shut down due to the bubonic plague. (And, according to my university philosophy professor, his dog was named “Diamond.”) Maybe that’s what I need right now – a dog named Diamond, and an outbreak of plague to hit Washington, forcing me to sit in my house isolated for three years and do nothing but work on AGI by myself. Of course, the plague would have to hit the Internet too – isolation is harder to come by these days. Nah, that’s just a silly thought – software engineering, unlike mathematics, is better done by a small “extreme programming” style team than by a single individual. Plus, I don’t quite trust myself to teach a baby AI alone; the baby needs a woman as well as a man as a teacher (Izabela, with some help from Zadi?) and it needs a strong dose of Cassio’s conservatism and good judgment. What I need is for the plague to strike when I’m stuck in a house with the 4 or 5 best members of the Novamente team. And preferably it’s a big house, so there’s room for my kids and dogs with their noise and chaos in a separate soundproofed wing! (Yeah, yeah, this is just a stupid joky digression, please don’t quote me as if I seriously want a plague to come down on the world, I don’t…. (I started thinking that, since I happen to live in the Washington DC area, a plague in my local region might end up having some positive effects due to eliminating a lot of politicians. But the body politic seems to have a self-regenerating characteristic similar to the limbs of certain lizards. And of course, a plague here in DC would probably be mistaken for a terrorist attack, which might cause Dubya to annihilate the continent of Africa by mistake or something … OK OK, enough!))

Wow, this is a long blog entry! I’d better call it to an end now. I’d intended to spend this flight finalizing the manuscript of “The Path to Posthumanity” – which, I recently noticed, the publisher has listed on amazon.com, even though I have not yet actually sent him the text of the book! Well, some things move fast these days. Unfortunately I’m not going to be able to make that book nearly as good as I’d like, due to lack of time rather than lack of ability. I need to get that one out of the way so I can get back to finishing “Foundations of Emergent Cognition” (the shiny new name for the “Novamente book”), which is pretty much done, but just needs a final going-over, addition of references, clarification of which aspects of the discussion pertain to Novamente in particular versus which pertain to “any sensible AGI design,” etc. etc. Still, maybe “Path” will get some sales riding on the coattails of Kurzweil’s “Singularity.” The books cover much of the same ground, but mine gives fewer exponential and hyperexponential charts and more scientific depth – and mine also gives a more transcensionist, less “kinder, gentler” view of the Singularity. (Kurzweil is brilliantly insightful, yet he often seems to downplay the dramatic nature of the Singularity even as he trumpets its inevitability. Sometimes it seems like he foresees a Singularity full of modified or uploaded humans with shiny new gadgets – rather than a fundamental overthrow of the current order of mental, physical and social being. Of course, we may well get BOTH of these, but it seems a bit disingenuous to focus primarily on the former, even though it’s easier to understand and goes down better on Main Street. But of course, these comments are based on not having seen his book, which hasn’t been released yet – they’re based on his prior books and his speeches and online writings – maybe his book will give fair time to the transcensionist aspects as well, we’ll see.)

Enough – enough rambling, Ben – enough. Focus! Focus! Focus! Finish “Path to Posthumanity” and send it to the damn publisher! Write those Biomind press releases! Test the new Biomind ArrayGenius release! Finish the Novamente book! Launch the damn Singularity already so you can give yourself a better temporal-assignment-of-credit algorithm, eat 7 cakes without gaining weight, and push your daughter on the swing and canoe past crocodiles while composing weird jazz fission and programming meta-Haskell and kiss your wife while proving theorems that are themselves hyperdimensional conscious beings… yadda yadda… Focus! Focus! Focus!

Ah – wait – one more afterthought about focus. I had a Buddhist friend once who, every time I made a mechanical mishap like spilling a cup of tea, would point out to me: “See, if you were an enlightened Buddhist master, you’d never do anything like that. You’d never spill your tea because you’d be totally focused on whatever you were doing, in that moment!” In fact, this guy was neither particularly enlightened nor particularly focused nor emotionally balanced himself, though he was highly adept at pointing out the unenlightenment of others -- but he did have a point there. But of course, my retort was always “Fine, but I don’t WANT to focus my total attention on something boring like holding a cup. I’ll accept a certain error rate with boring things in order to focus most of my attention on interesting things. It’s no wonder no Buddhist master has ever achieved anything fascinating in science or mathematics – these things require focus in themselves, which is hard to obtain if one is focusing all one’s attention on drinking tea or raking leaves or breathing!” I think the analogy between Buddhist mindfulness and narrow-focus-for-business-success is not totally spurious. (Yeah, this brief paragraph doesn’t come near doing justice to my thoughts on Buddhism, but that’ll be saved for a later blog, it’s a deep and complicated issue in spite of its perfect simplicity, yadda yadda.) One problem is that the human mind is so painfully limited that it’s hard for it to do even one thing well, and when it divides its attention, it’s bound to make mistakes. Another problem is that we were probably evolved to focus on one intensive task at a time – like hunting, or escaping, or mating – and the modern emphasis on multitasking (on various time-scales) is an abuse of our evolutionary neural and physiological wiring.

Enough, OK, OK. Focus! Focus! Focus!

(I spent the flight from Salt Lake to San Fran sitting next to a very intelligent mining engineering executive who spoke very passionately about nutraceuticals and was bringing in a couple hundred thou a year selling them, via a variant of the classic “multilevel marketing” scheme. The nutraceutical line he was hawking actually seemed decent – founded on reasonable science – and I was almost convinced to give up the idea of making money for AGI through narrow-AI businesses and make the money through selling skin lotions and nutritional supplements instead. It might be a lot easier. I almost followed that plan when I was 13 and now I’m sorta wishing I had. OK, not really. But it’s an amusing thought…. I’m not such a bad salesman if I’m selling something I believe in; if I were selling e.g. life extension oriented supplements with some foundation in biology, I could probably give a convincing rap. My wife knows a lot of vain women in Brazil; maybe we could start the business in Brazil…. I’ve often thought that mixing up making money with AI is a mistake – it might be better just to keep my AI work pure and just accept that I need to spend a percentage of my time on some stupid business in order to pay the bills and hopefully eventually make enough money to pay the Novamente team to actually work on AGI engineering and teaching….. But, yah yah, the problem is that making any business work takes a lot of focus and attention, and it’s hard for me to see myself getting motivated to direct much of my focus and attention to something so boring as selling skin lotions…. The marketing ploy is slightly clever though: they suck women in by selling them skin lotion, and then upsell them to more expensive nutraceuticals, pointing out (correctly) that the key to beautiful skin is good health. Well, this sort of shit is what most humans seem to be interested in, right? Beautiful skin, big muscles, good hair, shiny teeth and symmetrical faces. If you can’t have them yourself at least you can watch them on TV! (OK, OK, I’m not really going to quit the AI business to sell skin lotion. Although I’m not sure it would be a stupid idea in the medium term; in the short term I don’t have the stomach for it…. And anyway Biomind’s business prospects are actually looking pretty good right now (sales pitch ahead: anyone want to buy some of the world’s best microarray data analysis software?)))

Foooocuuuuusssssss…...

Saturday, March 26, 2005

Smart Man with a Funny Beard

By the way -- one of these days I'll write a proper blog entry on anti-aging technology, but for the moment let me just indicate that everyone should look at

Aubrey de Grey's website

Not only is he an uncommonly cool-looking individual -- I think he even beats me at my coolest-looking (2000, I think that was -- back when I had numerous bogus paper millions, was still on my first wife, and none of my sons had a moustache, and I still sorta thought we could probably create an AI without an embodiment for simplicity's sake...) -- but he has some extremely interesting ideas on how to combat human aging.

I have my own as well, which intersect only partly with his -- but he's thought about it a lot more than me, so he's probably more likely to be right ;-)

Like the Novamente AGI project, nearly all of Aubrey's brilliant ideas are currently almost unfunded.

Well, it's not as though society doesn't spend money on research. And a lot of good research gets funded. But research funding seems to suffer from the same peculiar human-mob shortsightedness that causes the US to stick with the absurd, archaic English system of measurement year after year ... and that's causing English to emerge as the international language while Lojban remains the province of 350 geeks on an Internet mailing list...

More later! (For readers of my just-previous blog entry: Yes, I'm still procrastinating cleaning the turtle tank!)

Darkness at the Break of Noon, Goddamned Turtle Tank, etc.

"Darkness at the break of noon / Shadows even the silver spoon / The handmade blade, the child's balloon / Eclipses both the sun and moon / To understand you know too soon / There is no sense in trying."

Dylan usually gets it right....

Arrrghh.... I'm in an oddly dark mood this Sunday at 5PM, probably not a good frame of mind to be blogging, but it's a good way to delay cleaning out my son's turtle tank (my son doesn't live in it; his turtle Rick does) -- I don't really want to clean the tank but I know I have to do it, and if I start working on something intense or start playing the keyboard, the turtle will probably end up swimming in its own excrement for yet another day....

Hmmm ... the fact that I'm blogging about turtle shit probably indicates that I'm in a bad mood....

Not a bad day overall -- I got lots of interesting AI thinking & writing done, and took a long walk in the woods with the dogs. Sorta miss my kids as usual when they're at their mom's for the weekend. And, an interesting guest from New Zealand is arriving in a couple hours. Oops, better mop the dogs' mud off the floor, too.... (Sometimes I wish I lived in a country like Brazil where you don't need to be rich to have a maid! Is cleaning up really the best use of any fraction of my potentially far too limited lifespan? Well, if you saw the general state of my house you'd realize I don't think so!)

Maybe Buddha was right: all existence is suffering. Well of course he was right, but he left out the other half that Nietzsche said so well: "Have you ever said Yes to one joy? O my friends, then you have said Yes to all woe as well! All things are enchained, all things are entwined, all things are in love." Or something like that. In German, which I can't read, except for a few words and phrases. Everything is all mixed up, that's the nature of humanity. Almost every experience has some suffering in it -- only the most glorious peak of joy breaks this rule. And semi-symmetrically, almost every experience has some joy. Semi-symmetrically, because the mix of joy and pain seems rather differently biased for different people, based on variations in neurochemistry and situation. Most of the time I have an asymmetrically large amount of joy, I think -- as Dylan was well aware, it's not always easy to tell -- ....

Blah blah blah.

In moods like this I seriously consider giving up on the whole AI business and doing something easier and more amusing. I could become a professor again, write philosophy books and math papers, record CD's of weird music and write novels about alien civilizations living inside magic mushrooms.... I'm really more of a philosopher/artist type, software engineering isn't my thing ... nor is business. Not that I'm bad at these things -- but they don't really grab me, grip me, whatever metaphor you want today ....

Getting rich would be nice but I don't care too much about it -- I could live quite comfortably according to my standards without being rich, especially if I left accursed Washington DC for somewhere with cheaper land. Wow, I miss New Zealand, Western Australia and New Mexico ... great places I lived back in the day ... but it seems I'm stuck here in the DC metro for another 10 years due to a shared-child-custody situation.... Well, there are worse fates. And it's a good place for business....

Well, OK, time to clean the damn turtle tank! I could try to portray the stupid turtle (yeah, they really are stupid, though my son claims he can communicate with them psychically, and they're not as dumb as snakes) swimming in its own crap as a metaphor for something, but I don't feel perverted enough right now. Or at least, I don't feel perverted in the right sort of way.

For years I used to delude myself that I was just, say, 6 or 12 or 18 months away from having a completed thinking machine. That was a fun attitude, but it turned out I wasn't quite self-delusional enough to keep it up forever. I've now gained a lot more respect for how idiotic we humans are, and how much time it takes us to work through the details of turning even quite clear and correct abstract ideas into concrete realities. I've tried hard to become more of a realist, even though it makes me significantly less happy, I suppose because getting to the end goal is more important to me than being maximally happy.

I still think that if I managed to turn Biomind into a load of cash, or some rich philanthropist or government body decided to fund Novamente R&D, I could lead a small team of AI geniuses to the creation of an AI toddler within a few years. But realistically, unless a miraculous patron shows up or DARPA undergoes a random brain tremor and suddenly decides to fund one of my proposals, it's likely to take several years before I manage to drum up the needed funding to make a serious attack on the "Novamente AI toddler problem." (Yeah, I know, good things can sometimes pop up out of nowhere. I could get an email out of the blue tomorrow from the mystery investor. That would be great -- but I'm not counting on it.) Honestly, I just barely have it in me to keep doing software business for 3-5 more years. Not that it isn't fun sometimes, not that it isn't challenging, not that I don't learn a lot -- but it's an affront to my "soul" somehow (no I don't believe in any religious crap...). And no, it's not that I'm a self-contradictory being who would feel that way about any situation -- there are lots of things I love doing unreservedly, software business just isn't one of them. The difficulty is that the things I love doing don't seem to have decent odds of putting me in a position to create a thinking machine. I love music but I'm not good enough to become a star; and I'm about maximally good at fiction writing IMO, but my style and taste is weird enough that it's not likely to ever make me rich..... Urrgghh!!

Y'know, if I didn't have kids and obscenely excessive alimony payments (which were determined at a time when my businesses were more successful, but now my income is a lot lower and the alimony payment remains the same!! ... but hey, they only go on another couple years ;-p), I might just retreat to an electrified hut in some Third World country and program for three years and see if I could make the Novamente toddler myself. No more business and management and writing -- just do it. Very appealing idea. But Zarathustra (oldest son) starts college in a couple years. The bottom line is I'm not singlemindedly devoted to creating AI even though I think it's the most important thing for me to do -- I'm wrapped up with human attachments -- family attachments, which mean an awful lot to me.

Funny, just this morning I was reflecting on how great it was to be alone for a change -- the kids are with their mom, my wife is overseas visiting her family, the dogs and cats and turtle and gerbil don't quite count (OK, the gerbil almost does...) -- how peaceful and empty it felt and how easy it was to think clearly and work uninterruptedly. But now I see the downside: if a dark mood hits me there's no one to lift me out of it by showing me a South Park rerun or giving me a hug.... Human, all-too-human indeed!

And now this most boring and silly of my blog entries comes to an end. Unlike the previous ones I don't think I'll publicize this one on any mailing lists! But I guess I will click "Publish Post" in spite of some momentary reservations. Maybe someone will be amused to observe that egomaniacal self-styled AI superheroes have the same erratic human emotions as everyone else....

How important is it for this kind of human chao-emotionality to survive the Singularity? I'm not saying it shouldn't -- but isn't there some way to extract the joyous essence of humanity without eliminating what it means to be human? Perhaps there is. After all, some humans are probably "very happy" 5-10 times more often than others. What percentage of happiness can you achieve before you lose your humanity? All human existence has some suffering wending through it, but how much can it be minimized without creating "Humanoids"-style euphoridic idiot-bliss? I don't know, but even though I'm a pretty happy person overall, I'm pretty sure my unhappiness level hasn't yet pushed up against the minimum euphoridiotic boundary ;-p

And in classically humanly-perverse style, I find that writing about a stupidly unpleasant mood has largely made it go away. Turtle tank, here I come! Suddenly it doesn't seem so bad to do software business for a few more years, or spend a year going around giving speeches about AI until some funding source appears. Why the hell not? (No, I haven't taken any drugs during the last 10 minutes while typing this!). There's plenty of joy in life -- I had a great time doing AI theory this morning, and next week I'll be canoeing in the Everglades with my wife and kids. Maybe we should bring the turtle and let it swim behind the canoe on a leash?

Ahh.... Turtle tank, turtle tank, turtle tank. (That thing has really gotten disgusting, the filter broke and I need to drain it entirely and install a new filter.) Yum.

Saturday, March 12, 2005

Lojbanic AI and the Chaotic Committee of Sub-Bens

In 1998 when my dear departed AI software company Intelligenesis Corp. was just getting started, we had a summer "intern" by the name of Mark Shoulson, who was (if memory serves) a grad student at Rutgers University. Mark worked for us for a few summer months then went back to grad school. Although he was extremely bright with broad interests in computing and cognitive science, his work that summer focused on some technical issues in the computational linguistics portions of our software; he didn't really get into the deeper aspects of the AI theory Intelligenesis was pursuing. Informally, many of us referred to Mark as "The Klingon" because of his mastery of the Klingon language. (For the non-nerds in the audience: Yeah, when they created the Klingons in Star Trek, they hired a linguist to design an actual language for them. Cool, huh?) Mark was involved in the translation of Hamlet into Klingon and didn't mind showing off his Klingon fluency to curious colleagues. Mark's Klingon was smooth and fluent but often seemed a bit odd because of his kind and soft-spoken nature -- personality-wise, at least on the surface, Mark was pretty far from a Klingon-ish guy. He also told us about a friend of his who raised his daughter bilingual English-Klingon: speaking to his daughter only in Klingon from birth, while his wife spoke to her in English.

Along the way Mark also mentioned to me a language called Lojban, which he said was based on predicate logic. He observed to me in passing that it might be easier for us to make our AI system understand Lojban than English. I agreed that it might be, if Lojban was more logically structured, but I reckoned this wasn't very practical, since no one on the team except Mark spoke any Lojban. Also, we were interested in creating a real AI incrementally, along a path that involved spinning off commercial apps -- and the commercial applications of a Lojban-speaking AI system seemed rather few.

Well, six and a half years later, Mark's suggestion has started to seem like a pretty good one. In my new AI project Novamente, we have progressed moderately far along the path of computational language understanding. Our progress toward powerful general AI has been painfully slow due to the team's need to pay rent and the lack of any funding oriented toward the grand AI goal, but for 2004 and part of 2003 the Novamente team and I had some funding to build some English language processing software -- and while we didn't build anything profoundly real-AI-ish, we used the opportunity to explore the issues involved in AI language processing in some depth.

The language processing system that we built is called INLINK and is described here. It doesn't understand English that well by itself, but it interacts with a human user, presenting alternate interpretations of each sentence typed into it, until the human verifies it's found a correct interpretation. The interactive process is slow and sometimes irritating but it ultimately works, allowing English sentences to be properly interpreted by the AI system. We have plans to create a version of the INLINK system called BioCurator, aimed at biological knowledge entry -- this should allow the construction of a novel biology database containing formal-logic expressions representing biological knowledge of a much subtler nature than exists in current online bio resources like the Gene Ontology.
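
Just to make the control flow concrete, here's a toy sketch in Python of that interactive loop -- emphatically not actual INLINK code; the names (Interpretation, parse_candidates, ask_user) are invented for illustration, and all the hard work hides inside the two functions passed in:

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Interpretation:
    gloss: str        # human-readable paraphrase shown to the user
    logic_form: str   # logical expression that enters the AI's memory if confirmed

def disambiguate(sentence: str,
                 parse_candidates: Callable[[str], List[Interpretation]],
                 ask_user: Callable[[str, List[str]], Optional[int]]) -> Optional[Interpretation]:
    # Generate the alternate readings, show their glosses to the human,
    # and return whichever one the human verifies (or None if none fit).
    candidates = parse_candidates(sentence)
    if not candidates:
        return None
    choice = ask_user(sentence, [c.gloss for c in candidates])
    return candidates[choice] if choice is not None else None

The slow, sometimes irritating part is all hidden inside ask_user, of course.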

I've had a lot of doubts about the value of computational linguistics research for "real AI" -- there's a moderately strong argument that it's better to focus on perception, action and embodiment, and let the AI learn language as it goes along interacting with humans using its (real or simulated) body. On the other hand, there's also an argument that a certain degree of "cheating" may be helpful -- that building in some linguistic knowledge and facility may be able to accelerate the experiential-language-learning process. I've outlined this argument in an article called Post-Embodied AI.

The work on INLINK has clarified for me exactly what's involved in having an AI system understand English (or any other natural language). Syntax processing is tricky but the problems with it can be circumvented using an interactive methodology as we've done in INLINK; and eventually the system can learn from its errors (based on repeated corrections by human users) and make fewer and fewer mistakes. The result of INLINK is that English sentences are translated into probabilistic logical expressions inside Novamente's memory, which may then be interpreted, reasoned on, data-mined, intercombined, and yadda yadda yadda. Very nice -- but nasty issues of computational efficiency arise.

Novamente's probabilistic-inference module currently exists only in prototype form, but the prototype has proven capable of carrying out commonsense reasoning pretty well on a number of simple test problems. But there's a catch: for the reasoning process to be computationally tractable, the knowledge has to be fed to the reasoning module in a reasonably simple format. For instance, the knowledge that Ben likes the Dead Kennedys has to be represented by a relationship something like

#likes( #Ben_Goertzel, #Dead_Kennedys)

where the notation #X refers to a node inside Novamente that is linked by a high-strength link to the WordNode/PhraseNode representing the string X. Unfortunately, if one types the sentence

"Ben likes the Dead Kennedys"

into INLINK, the Novamente nodes and links that come out are more complicated and numerous and less elegant. So a process called "semantic transformation" has to be carried out. This particular case is simple enough that this process is unproblematic for the current Novamente version. But for more complex sentences, the process is, well, more complex, and the business of building semantic transformations becomes highly annoying. One runs into severe issues with the fuzziness and multiplicity of preposition and verb-argument relationships, for example. As occurs so many times in linguistics and AI, one winds up generating a whole bunch of rules which don't quite cover every situation -- and one realizes that in order to get true completeness, so many complexly interlocking small rules are needed that explicitly encoding them is bound to fail, and an experiential learning approach is the only answer.
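
To show roughly what I mean by a semantic transformation, here's a toy Python rule that handles only the trivial subject-verb-object case -- the parse structure and the rule are invented for illustration, not Novamente internals:

from typing import Optional

def transform_svo(parse: dict) -> Optional[str]:
    # Collapse the simplest subject-verb-object parse into the compact
    # #predicate( #arg1, #arg2) form that the reasoning module prefers.
    if parse.get("type") != "clause":
        return None
    subj = parse.get("subject", {}).get("head")
    verb = parse.get("verb", {}).get("lemma")
    obj = parse.get("object", {}).get("head")
    if subj and verb and obj:
        return "#%s( #%s, #%s)" % (verb, subj, obj)
    return None  # anything fancier needs further rules

parse = {
    "type": "clause",
    "subject": {"head": "Ben_Goertzel"},
    "verb": {"lemma": "likes"},
    "object": {"head": "Dead_Kennedys", "determiner": "the"},
}
print(transform_svo(parse))  # -> #likes( #Ben_Goertzel, #Dead_Kennedys)

Prepositions, quantifiers, tense and verb-argument fuzziness each demand whole families of further rules like this one, which is exactly where the hand-coding approach starts to drown.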

And this is where -- as I just recently realized -- Lojban should come in! Mark Shoulson was right back in 1998, but I didn't want to see it (urrrgghh!! what useful things are smart people saying to me now that I'm not accepting simply because I'm wrapped up in my own approaches?? why is it so hard to truly keep an open mind?? why is my information processing capacity so small??!! wait a minute -- ok -- this is just the familiar complaint that the limitations of the human brain are what make it so damn hard to build a superior brain. And the familiar observation that cutting-edge research has a way of making the researcher feel REALLY REALLY STUPID. People tell me I'm super-smart but while working on AI every day I come to feel like quite a bloody moron. I only feel smart when I re-enter the everyday world and interact with other people ;-p)

What if instead of making INLINK for English, we made it for Lojban (LojLink!)? Of course this doesn't solve all the problems -- Lojban is a constructed language based on formal logic, but it's not equivalent to formal logic; it allows ambiguity where the speaker explicitly wants it, otherwise it would be un-usable in practice. Semantic transformation rules would still be necessary to make an AI system understand Lojban. But the human work required to encode such transformations -- and the AI learning required to learn such transformations -- would clearly be one or two orders of magnitude less for Lojban.

Lojban isn't perfect... in my study of Lojban over the last week I've run up against the expected large number of things I would have designed differently, if I were building the language. But I have decided to resist the urge to create my own Lojban-ish language for AI purposes, out of respect for the several decades of work that have gone into "tuning" Lojban to make it more usable than the original version was.

In some respects Lojban is based on similar design decisions to the knowledge representation inside my Novamente AI Engine. For instance, in both cases knowledge can be represented precisely and logically, or else it can be represented loosely and associatively, leaving precise interpretation reliant on contextual factors. In Lojban loose associations are represented by constructs called "tanru" whereas in Novamente they're represented by explicit constructs called AssociativeLinks, or by emergent associations between activity-patterns in the dynamic knowledge network.
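
To make the parallel a bit more concrete, here's a crude sketch in Python of the two representation styles -- the class names are simplified stand-ins for illustration, not Novamente's actual node-and-link types:

from dataclasses import dataclass
from typing import Tuple

@dataclass
class PreciseRelation:
    # precise and logical: a predicate applied to specific arguments, with a strength
    predicate: str
    args: Tuple[str, ...]
    strength: float

@dataclass
class LooseAssociation:
    # loose and tanru-like: "these concepts go together"; interpretation is left to context
    source: str
    target: str
    strength: float

precise = PreciseRelation("likes", ("Ben_Goertzel", "Dead_Kennedys"), 0.9)
loose = LooseAssociation("skin_lotion", "good_health", 0.6)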

Next, it's worth noting that Lojban was created and has been developed with a number of different goals in mind -- my own goal, easier interfacing between humans and early-stage AGI's, being just one of them.

Some Lojbanists are interested in having a "culturally neutral" language -- a goal which, while interesting, means fairly little to me.

In fact I don't really believe it's possible -- IMO Lojban is far from culturally neutral, it embodies its own culture, a nerdy and pedantic sort of culture which has plusses and minuses. There is a Lojban term "malglico" which translates roughly to "damn English" or "fucking English" -- it refers to the tendency to use Lojban in English-like ways. This is annoying to Lojban purists but really doesn't matter to me. What I care about is being able to communicate in a way that is fluid and simple and natural for me, and easy for an early-stage AI to comprehend. If the best way to achieve this is through a malglico dialect of Lojban, so be it. If malglico interferes with the comprehensibility of Lojban by AI software, however, then I'm opposed to it.

I've printed up a bunch of materials on Lojban and started studying it seriously -- if I keep up with it then in 6 months or so I'll be a decent Lojbanist. Generally I'm not much good at learning languages, but that's mostly because it bores me so much (I prefer learning things with more of a deep intrinsic structure -- languages always strike me as long lists of arbitrary decisions, and my mind wanders to more interesting things when "I" try to force it to study them...). But in this case I have a special motivation to help me overcome the boredom....

If you want to try to learn Lojban yourself, the most useful resources I've found are:


If it does happen that we teach Novamente to speak Lojban before English then in order to participate in its "AI preschool" you'll need to know Lojban! Of course once it gets beyond the preschool level it will be able to generalize from its initial language to any language. But the preschool level is my focus at the moment -- since as I'm intensely aware, we haven't gotten there yet!

I remain convinced that with 2-3 years of concentrated single-focused effort by myself and a handful of Novamente experts (which will probably only be possible if we get some pure-AI-focused funding, alas), we can create a Novamente system with the intelligence and creativity and self-understanding of a human preschooler. But I'm trying really hard to simplify every aspect of my plan in this regard, just to be sure that no unexpected time-sinks come along. One advantage of NOT having had pure-AI-focused funding for the last few years is that the AI design has been refined an awful lot during this frustrating period. The decision to take a "post-embodied" approach to linguistics -- incorporating both experiential learning and hard-wiring of linguistic knowledge -- is not a new one; that was the plan with Webmind, back in the day. But the idea of doing initial linguistic instruction and hard-wiring for Novamente in Lojban rather than English is a new one and currently strikes me as quite a good one.

Ah -- there's a bit of a catch, but not a big one. In order to do any serious "hard-wiring" of Lojban understanding into Novamente or any other AI system, the existing computational linguistics resources for Lojban need to be beefed up a bit. I describe exactly what needs to be done here. It seems to me there's maybe 3/4 man-years of work in making pure Lojbanic resources, and another year of work in making resources to aid in automated Lojban-English translation.

And another interesting related point. While in 1998 when Mark Shoulson first pointed Lojban out to me, I thought there were no practical commercial applications for a Lojban-based AI system, I've now changed my mind. It seems to me that an AI system with a functional Lojban language comprehension module and modest level of inferential ability would actually be quite valuable in the area of knowledge management. If a group of individuals were trained in Lojban, they could enter precise knowledge into a computer system very rapidly, and this knowledge could then be reasoned on using Novamente or other tools. This knowledge base could then be queried and summarized in English -- because processing simple English queries using a system like INLINK isn't very hard, and doing crude Lojban-English translation for results reporting isn't that hard either. In any application where some institution has a LOT of knowledge to encode and several years to do it, it may actually make sense to take a Lojbanic approach rather than a more standard approach. Here you'll find an overview of this approach to knowledge management, which I call LojLink.
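
Sketched as a pipeline -- with every stage stubbed out, and the function names being placeholders rather than real software -- the LojLink knowledge-management idea looks something like this in Python:

from typing import List

def lojban_to_logic(lojban_sentence: str) -> str:
    # Stub: parse a Lojban sentence and emit a logical expression.
    raise NotImplementedError

def english_query_to_logic(english_query: str) -> str:
    # Stub: simple English queries are far easier to parse than free English text.
    raise NotImplementedError

def render_in_english(logic_results: List[str]) -> str:
    # Stub: crude logic-to-English rendering, good enough for reporting results.
    raise NotImplementedError

class LojLinkKnowledgeBase:
    def __init__(self) -> None:
        self.facts: List[str] = []

    def add_lojban(self, sentence: str) -> None:
        # Trained human encoders enter precise knowledge here, rapidly.
        self.facts.append(lojban_to_logic(sentence))

    def query_english(self, question: str) -> str:
        # Stand-in for real inference: just match the query form against stored facts.
        goal = english_query_to_logic(question)
        matches = [fact for fact in self.facts if goal in fact]
        return render_in_english(matches)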

One example where this sort of approach to knowledge encoding could make sense is bioscience -- I've long thought that it would be good to have every PubMed abstract entered into a huge database of bio knowledge, where it could then be reasoned on and connected with online experimental biology data. But AI language comprehension tools aren't really up to this task -- all they can do now is fairly simplistic "information extraction." We plan to use a bio-customized version of INLINK to get around this problem, but entering knowledge using INLINK's interactive interface is always going to be a bit of a pain. There's enough biology out there, and the rate of increase of bio knowledge is fast enough, that it makes sense to train a crew of bio knowledge encoders in Lojban, so that the store of bio knowledge can be gotten into computer-comprehensible form at maximum rate and minimum cost. Yes, I realize this sounds really weird and would be a hard idea to sell to venture capitalists or pharma company executives -- but that doesn't mean it doesn't make sense....

As another aside, there is some Lojban poetry on the Net but I haven't found much Lojban music. I like to sing & play the keyboard sometimes (in terms of vocal style, think Bob Dylan meets Radiohead); I'm considering doing some of my future lyrics in Lojban! True, few listeners would understand what I was talking about -- but I reckon that, in many cases, the verbal contents of lyrics aren't all that important -- what's important is the genuineness of feeling attached to them, which is achievable if the words have deep meaning to the singer, whether or not the listener can understand them. Of course, I have some lyrics that violate this rule and succeed at least a bit in communicating poetically (even a bit of transhumanist lyricism here and there -- e.g. "I've got to tell you something / your lonely story made me cry / I wish we all could breathe forever / God damn the Universal Mind"). But even so I think Lojbanic lyrics could really rock....

But -- wow -- how to fit learning a new language into my schedule? Urgggh!! Way too much to do. Fortunately I have a wife who says she's willing to learn this weird language along with me, which will make things much easier; it'd be trickier to learn a language with no one to speak to. But still ... every time something new like this comes up I'm confronted with the multiplicity of Bens in my head: each with different goals and priority rankings on their shared goals ... some of them saying "Yeah! You've got to do this!", others cautioning that it will siphon away the sometimes irritatingly small amount of time currently allocated to enjoying the non-intellectual aspects of human life in the Ben-iverse....

But "I" digress. Or do I?

Perhaps internal multiplicity and the falsehood of the unified "I" is a topic best saved for another blog entry. But yet, it does tie back into Lojban -- which I notice contains a single word for "I" just like ordinary languages. This is an area where I'm tempted to introduce new Lojbanic vocabulary.

I don't know what "I" am. I like the Walt Whitman quote "I contradict myself? Very well then, I contradict myself. I am large, I contain multitudes." Indeed, I do. In From Complexity to Creativity I explored the notion of subselves extensively. This notion should be explicitly embodied in language. You should be able to say "One of my subselves wants X" rather than "I want X" -- easily, via a brief linguistic expression, rather than a complicated multi-phrasal description. The distinction between "Some of my subselves want this very intensely" and "All of my subselves want this moderately strongly" should be compactly and immediately sayable. If these things were compactly and simply expressible in language, maybe we'd get out of the habit of thinking of ourselves as unities when we're really not. At least, I'm definitely not. (Just like I feel idiotic most of the time, then feel more clever when interacting with others; similarly, when I'm on my own I often feel like a population of sub-Bens with loosely affiliated goals and desires, and then I feel more unified when interacting with others, both because others view me as a whole, and because compared to other peoples' subselves, mine all cluster together fairly tightly in spite of their differences... (and then I'm most unified of all when I let all the goals drift away and dissolve, and exist as a single non-self, basking in the 1=0, at which point humanity and transhumanity and language and all that seem no more important than un ... but now I really digress!)). And in an appropriately designed language -- say, a subself-savvy extension of Lojban -- this paragraph would be a lot shorter and simpler and sound much less silly.

And this brings up a potentially very interesting aspect of the idea of teaching AI systems in odd constructed languages. My main motivation for thinking about using Lojban instead of English to teach Novamente is to simplify the semantic mapping process. But, it's also the case that English -- like all other natural languages -- embodies a lot of really irritating illusions ... the illusion of the unified self being one of them. Lojban now also happens to embody the illusion of the unified self, but this is a lot easier to fix in Lojban than in English, because of the simpler and more flexible structure of the Lojban language. I don't buy the strongest versions of the Sapir-Whorf hypothesis (though I think everyone should read Whorf's essay-collection Language, Thought and Reality), but clearly it's true that language guides cognition to a significant extent, and this can be expected to be true of AI's at least as much as of humans.

I can envision a series of extensions to Lojban being made, with the specific objective of encouraging AI systems to learn to think according to desired patterns. Avoidance of illusions regarding self is one issue among many. Two areas where Lojban definitely exceeds English are ethics and emotion. English tends to be very confused in these regards -- look at the unnecessary ambiguities of the words "happy" and "good", for example. The current Lojban vocabulary doesn't entirely overcome these problems, but it does so significantly, and could be improved in these regards with modest effort.

Well, as I type these words, my son Zeb is sitting next to me playing "Final Fantasy" (yes my work-desk sits in the livingroom next to the TV, which is mostly used by the kids for videogames, except for their obsessive viewing of South Park... the new season just started, there are new episodes now, and did you know Mr. Garrison is now Mrs. Garrison??!!). As I look over at the manly-chested (scrawny, artistic little 11-year-old Zeb's favorite utterance these days: "Admire my manly chest or go down trying!"), womanly-faced heroes run through their somewhat bleak simulated landscape, and feel really intensely sick of the repetitive background music, I can't help but observe that, obsessed as he is with that game, I'm even more obsessed with my own "final fantasy." Or am I? One of my "I"'s is. Another one feels a lot more like playing the piano for an hour or so before bed, even though clearly working on AI for that time-interval would be more productive in terms of the long-term good of Ben and the cosmos. Or is playing music a while justified by the mental peace it brings, enabling clearer thinking about AI research later? How to ensure against self-delusion in judgments like that? Ah, by 38 years old I have devised an excellent set of mental tools for guarding against delusion of one subself by itself, or of one subself by others -- and these tools are frustratingly hard to describe in the English language! No worries -- the community of sub-Bens remains reasonably harmonious, though in the manner of strange attractors rather than fixed psychological arrangements. The chaos goes on.... (ranji kalsa ... ) ... the human chaos goes on, moving inevitably toward its own self-annihilation or self-transcendence ... and my committee of sub-Bens unanimously agrees that it's worth spending a lot of thought and effort to bias the odds toward the latter ...

Tuesday, March 08, 2005

Cognitive Neuroscience of Consciousness

At any given point in time, some research areas are stagnant, some are slowly waddling along, and some are zooming ahead amazingly. Unfortunately my own main field of AI is currently in the mostly-stagnant category, though I hope my work on Novamente will change all that within the next couple years. On the other hand, molecular genetics (the domain of my work with my startup Biomind LLC) seems to be zooming ahead amazingly, and the same can be said for some aspects of neuroscience -- e.g. the subject of this post, the cognitive neuroscience of consciousness.

So, in this rather technical and academic blog entry (no details on my sex life or the psychotropic characteristics of Armenian swine toes today, sorry...), I'm going to talk about some interesting research in this field that I've been reading about lately....

Specifically, I'm going to briefly comment on a paper I just read, by a guy named Ned Block, called "Paradox and Cross-Purposes in Recent Work on Consciousness." The paper is in a book called "The Cognitive Neuroscience of Consciousness," which is a special issue of the journal COGNITION. Many of the other papers in the book are good too. This is one of two really good books I've recently read on this subject, the other being "Neural Correlates of Consciousness" edited by Thomas Metzinger (whose over-long tome Being No One, on the way the brain-mind constructs phenomenal selves, I also recommend).

One point raised repeatedly in the book is that the brain can often respond to stimuli in an unconscious and yet useful way. Stimuli that are too weak to enter consciousness can nevertheless influence behavior, via priming and other methods. For instance, if a person is shown a picture (the Muller-Lyer illusion) that is known to cause the human mind to mis-estimate line lengths, and then asked to make a motor response based on the line lengths in the picture (say, pointing to the ends of the lines) VERY QUICKLY, they will respond based on the actual line lengths without making any illusory perceptions. But if they are given a little more time to respond, then they will respond erroneously, falling prey to the illusion. The illusion happens somewhere between perception and cognition -- but this pathway is slow, and there can be super-quick loops between perception and action, which bypass cognition with all its benefits and illusions.

Block, in his paper, raises the familiar point that the concept of "consciousness" is a bit of a mess, and he decomposes it into three subconcepts:
  • phenomenality (which I've called "raw awareness")
  • accessibility (that something is accessible throughout the brain/mind, not just in one localized region)
  • reflectivity (that something can be used as content of another mental experience)
He argues that, quite possibly, in the human mind/brain

  • everything has some phenomenality ("the mind is aware of everything inside it", which to me is just a teeeeeensy step from the attractive panpsychist proposition "everything is aware")
  • but only things that undergo a particular kind of neural/mental processing become reflective, and
  • with reflectivity comes accessibility
I find this to be a pretty astute analysis of the nature of human consciousness. And I note that not all minds would have to be this way. There's no apparent reason why one couldn't have a mind in which accessibility without reflectivity was common -- though I'm not recommending architecting one. (What would the experience of such a creature be like? I've long advocated a kind of "psychological sci-fi" consisting of Joycean or Kathy-Acker-ean or Henry-Miller-oid first-person stream-of-consciousness rants by beings with completely different cognitive structures. In fact I think I had several dreams like that last week...!)

Accessibility has to do with Baars' old-but-good notion of the "global workspace" -- the idea that reflective consciousness consists of representing knowledge in some kind of "workspace" where it can be freely manipulated in a variety of ways. This workspace appears not to be localized in any particular part of the brain, but rather to be a kind of coordinated activity among many different brain regions ... perhaps, in dynamical systems terms, some kind of "attractor."

The experienced intensity of consciousness of something, Block proposes, has to do largely with the intensity of the phenomenality of the something, which may have to do with the amount of activation in the neural region where the "something" is taking place. But reflectivity requires something else besides just intensity (it requires the triggering of the global workspace attractor).

In terms of scientists' search for neural correlates of consciousness, Block reckons that what they're finding now are mainly neural correlates of intense phenomenality. For instance, when the ventral area of the brain is highly active, this seems to indicate some conscious perception is going on. But, if reflectivity is a separate and additional process to phenomenality, then finding neural correlates of the latter may not be any help in deducing the neural basis of the former.

Block's ideas fit in pretty nicely with my hypothesis (see my essay Patterns of Awareness) that the phenomenality attached to a pattern has to do with the degree to which that pattern IS a pattern in the system that it's a pattern in. In this view, locally registered things can be patterns in the brain and ergo be phenomenal to an extent; but, expansion of something into the global workspace attractor is going to make it a lot more intense as a pattern, ergo more intensely phenomenal. Ergo in the human brain intense phenomenality and reflectivity seem to go along with each other -- since both are coupled to accessibility....

All this is still pretty far from a detailed understanding of how consciousness arises in human brains. But finally, it seems to me that neuroscientists are saying the right sorts of things and asking the right sorts of questions. The reason isn't that this generation of neuroscientists is wiser than the last, but rather that modern experimental tools (e.g. fMRI and others) have led to empirical data that make it impossible either to ignore the issue of consciousness, or to continue to hold to simplistic and traditional views.

No specific brain region or brain function or neurotransmitter or whatever will be found that causes raw awareness (Block's phenomenality). But the particular aspects associated with intense human awareness -- like global cognitive accessibility and reflectivity -- will in the next few years come to be clearly associated with particular brain structures and processes. As Block proposes, these will come to be viewed as ways of modulating and enhancing (rather than causing) basic phenomenal awareness. In AI terms, it will become clear how software systems can emulate these structures and processes -- which will help guide the AI community to creating reflective and highly intelligent AI systems, without directly addressing the philosophical issue of whether AI's can really experience phenomenality (which is bogus, in my view -- of course they can; every bloody particle does; but for me, as a panpsychist, the foundational philosophy of consciousness is a pretty boring and easy topic).

I don't find these ideas have much to add to the Novamente design -- I already took Baars' global workspace notions into account in the design of Webmind, Novamente's predecessor, way back in the dark ages when Java was slow and Dubya was just a nightmare and I still ate hamburgers. But they increase the plausibility of simple mappings between Novamente and the human mind/brain -- which is, as my uncle liked to say, significantly better than a kick in the ass.

Sunday, March 06, 2005

Terrified Medieval Christs and Weird Picasso Women

Izabela and I went to the art museum in DC today, and I realized that pretty much my favorite thing in visual art is the expressions on people's faces. Even in the extensive galleries of medieval and renaissance pictures of Jesus, which generally make me sick after a few minutes (and aghast that so many humans could believe so many stupid things for so long -- and wondering what things, almost as stupid as the whole crucifixion/resurrection drama, we may currently believe --), I found myself moved by some of the facial expressions.... Some painters (and fewer sculptors, Rodin being my favorite by far) have a remarkable way of catching expressions that capture the essence of a person's being -- or at least, that give the illusion thereof ... they capture SOME essence, and how that essence relates to the actual human who was being painted doesn't matter much from my point of view....

The essence of some human's being -- what does that mean? The core of some human's personality.... It's different for each one of us, but still there are common patterns -- a common essence of being human. Always some pleasure and some pain. Some resignation to fate, some resolution to struggle. In the interesting faces, some deep joy, some terrible suffering. We humans are bundles of contradictions -- that's part of what makes us human.

I thought about the Singularity, of course -- about transcending what is human, and about perfecting what is human to make something that's human yet better than human. And I found myself really intuitively doubting the latter possibility. Isn't the essence of being human all bound up with contradiction and confusion, with the twisting nonstationary nonlinear superposition of pleasure and pain, of clarity and illusion, of beauty and hideousness?

Some humans are perverse by nature -- for instance, priests who condemn child molestation in their sermons while conducting it in their apartments. But even without this nasty and overt sort of self-contradiction, still, every human personality is a summation of compromises. I myself am a big teeming compromise, with desires to plunge fully into the realm of the intellect, to spend all day every day playing music, to hang out and play with my wife and kids all the time, to live in the forest with the pygmies, to meditate and vanquish/vanish the self....

Potentially with future technology we can eliminate the need for this compromise by allowing Ben to multifurcate into dozens of Bens, one living in the forest with the pygmies, one meditating all day and achieving perfect Zen enlightenment, one continually playing childrens' games and laughing, one proving mathematical theorems until his brain is 90% mathematics, one finally finishing all those half-done novels, one learning every possible musical instrument, one programming AI's, etc. etc. Each of these specialized Bens could be put in telepathic coordination with the others, so they could all have the experience, to an extent, of doing all these different things. This would be a hell of a great way to live IMO -- I'd choose it over my current existence. But it'd be foolish to call this being human. Getting rid of the compromises means getting rid of humanity.

The beauty I see in the faces portrayed by great artists is largely the beauty of how individual human personalities make their own compromises, patch together personal realities from the beauty and the terror and the love and the hate and the endless press of limitations. Getting rid of the compromises is getting rid of humanity....

Trite thoughts, I suppose.... Just another page in my internal debate about the real value of preserving humanity past the Singularity. Of course, I am committed to an ethic of choice -- I believe each sentient being should be allowed to choose to continue to exist in its present form, unless doing so would be radically dangerous to other sentient beings. Humans shouldn't be forced to transcend into uberhumans. But if they all chose to do so, would this be a bad thing? Intuitively, it seems to me that 90% of people who chose to remain human rather than to transcend would probably be doing so out of some form of perversion. And the other 10%? Out of a personality-central attachment to the particular beauty of being human, the particular varieties of compromises and limitations that make humans human ... the looks on the faces of the twisted medieval Christs and weird Picasso women....

(Of course, in spite of my appreciation for the beauty of the human, I won't be one of those choosing to turn down transcension. Though I may allow a certain percentage of my future multi-Bens to remain human ... time will tell!)

Introductory Whining and Complaining About the Difficulty of Getting Funding to Build a Real AI

A bunch of people have said to me, recently (in one version or another), "Ben, you write so much, why don't you write a blog like all the other pundits and gurus?"

My answer was that I don't have time, and I really don't -- but I decided to give it a try anyway. Last time I tried blogging was in 2002 and I kept going for a few months, then petered out. Maybe this time will have a better fate!

What's on my mind lately? Frustration, in large part. My personal life is going great -- last year my drawn-out divorce finally concluded; my kids are finally pretty much settled into their new routine and doing well again, and my new wife Izabela and I are having a great time together.

I'm enjoying doing bioinformatics research with Biomind, and recording whacky music using Sonar4 (the first time I've hooked up a sequencer to my keyboard for many years; I'd avoided it for a while due to its powerful addictive potential).

Life is good. But the problem is: the longer I think about it, the more I write about it and the more exploratory design and engineering work my Novamente colleagues and I do, the more convinced I am that I actually know how to make a thinking machine... an intelligent software program, with intelligence at the human level and beyond.

Yeah, I know, a lot of people have thought that before, and been wrong. But obviously, SOMEONE is going to be the first one to be right....

I don't pretend I have every last detail mapped out. There are plenty of little holes in my AI design, and they'll need to be filled in via an iterative, synergistic process of experimentation and theory-revision. But the overall conceptual and mathematical design is solid enough that I'm convinced the little holes can be filled in.

What's frustrating is that, though I can clearly see how to do it, I can also clearly see how much work it requires. Not a Manhattan Project scale effort. But more work than I could do in a couple years myself, even if I dropped everything else and just programmed (and even if I were a faster/better programmer like some of the young hacker-heroes on the Novamente team). My guess is that 3 years of 100% dedicated effort by a team of 5-6 of the right people would be enough to create an AI with the intelligence of a human toddler. After that point, it's mostly a matter of teaching, along with incremental algorithm/hardware improvements that can be carefully guided based on observation of the AI mind as it learns.

And I have the right 5-6 people already, within the Novamente/Biomind orbit. But they're now spending their time on (interesting, useful) narrow-AI applications rather than on trying directly to build a thinking machine.

I thought for a while that we could create a thinking machine along the way, whilst focusing on narrow-AI applications. But it's not gonna work. Real AGI and narrow-AI may share software components, they may share learning algorithms and memory structures, but the basic work of building an AGI cognitive architecture out of these components, algorithms and structures has nothing to do with narrow AI.

As CEO of Biomind, a startup focused on analyzing biological data using some tools drawn from the Novamente AI Engine (our partially-complete, wannabe AGI system) and some other AI tools as well, I'm constantly making decisions to build Biomind software using methods that I know don't contribute much if at all toward AGI. This is because from a Biomind point of view, it's often better to have a pretty good method that runs reasonably fast and can be completed and tested relatively quickly -- rather than a better method that has more overlap with AGI technology, but takes more processor time, more RAM, and more development time.

Although our work on Biomind and other commercial apps has helped us to create a lot of tools that will be useful for building an AGI (and will continue to do so), the bottom line is that in order to create an AGI, dedicated effort will be needed. Based on the estimate I've given above (5-6 people for 3 years or so), it would seem it could be done for a million US dollars or a little less.

Not a lot of money from a big-business perspective. But a lot more than I have lying around, alas.

Some have asked why I don't just build the thing using volunteers recruited over the Net. There are two reasons.

One, this kind of project doesn't just require programmers; it requires the right people -- with a combination of strong programming, software design, cognitive science, computer science and mathematical knowledge. This is rare enough that it's a hard combination to find even if you have money to pay for it. To find this combination among the pool of people who can afford to work a significant number of hours for free ... well, the odds seem pretty low.... (Though if you have the above skills and want to work full or near-full-time on collaborating to build a thinking machine, for little or no pay, please send me an email and we'll talk!!)

Two, this is a VERY HARD project, even with a high-quality design and a great team, and I am not at all sure it can be successfully done if the team doesn't have total focus.

Well, I'm hoping the tides will turn in late 2005 or early 2006. Finally this year I'll release the long-awaited books on the Novamente design and the underlying ideas, and following that I'll attempt a serious publicity campaign to attract attention to the project. Maybe Kurzweil's release of his Singularity book in late 2005 will help, even though he's a skeptic about AGI approaches that don't involve detailed brain simulation. I'd much rather focus on actually building AGI than on doing publicity, but, y'know, "by any means necessary" etc. etc. ;-)

OK, that's enough venting for one blog entry! I promise that I won't repeat this theme over and over again, I'll give you some thematic variety.... But this theme is sure to come up again and again, as it does in my thoughts....

Very foolish of the human race to be SO CLOSE to something SO AMAZING, and yet not have the common sense to allocate resources to it instead of, for instance, the production of SpongeBob-flavored ice cream (not that I have anything against SpongeBob, he's a cute little guy...)...

P.S. Those with a taste for history may recall that in the late 1990's I did have a significant amount of funding for pure AI work, via the startup company Intelligenesis (aka Webmind), of which I was a cofounder. We tried for about 3 years and failed to create a real AI, alas. But this was not because our concepts were wrong. On the contrary, it was because we made some bad decisions regarding software engineering (too complex!), and because I was a bad manager, pursuing too many different directions at once instead of narrowly focusing efforts on the apparently best routes. The same concepts have now been shaped into a much simpler and cleaner mathematical and software design, and I've learned a lot about how to manage and focus projects. Success consists of failing over and over in appropriately different ways!