Quasiperiodic Ben-blather


Ben-blather archives

Sunday, January 28, 2001

[To sysopmind & Eliezer Yudkowsky, about Friendly AI]

hi,

> > But, the case is weaker that this is going to make AI's consistently and
> > persistently friendly.
>
> Well, yes, your version has the antianthropomorphic parts of the paper but
> only a quickie summary of how the actual Friendship system works.

OK, I'm waiting...

>
> > There are 2 main points here
> >
> > 1)
> > AI's may well end up ~indifferent~ to humans. My guess is that even if
> > initial AI's are
> > explicitly programmed to be warm & friendly to humans, eventually
> > "indifference to humans" may become an inexorable attractor...
>
> What forces create this attractor? My visualization of your visualization
> is that you're thinking in terms of an evolutionary scenario with vicious
> competition between AIs, such that all AIs have a finite lifespan before
> they are eventually killed and devoured by nearby conspecifics; the humans
> are eaten early in the game and AIs that expend energy on Friendliness
> become extinct soon after.

Not much like that, no.

More like this: Just as most humans find other humans more interesting than computers or nonhuman animals
right now (members of this list may be exceptions ;), similarly, most AI's will find other AI's more
interesting than humans. Not murder of other AI's, but success in the social network they
find most interesting (other AI's), will be a driving goal of an AI system, and humans will become
largely irrelevant to AI systems' psychologies.

> > 2)
> > There WILL be an evolutionary aspect to the growth of AI, because there are
> > finite computer resources and AI's can replicate themselves potentially infinitely.
> > So there will be a "survival of the fittest" aspect to AI, meaning that AI's with greater
> > initiative, motivation, etc. will be more likely to survive.
>
> You need two things for evolution: first, replication; second, imperfect
> replication. It's not clear that a human-equivalent Friendly AI would
> wish to replicate verself at all - how does this goal subserve
> Friendliness? And if the Friendly AI does replicate verself, why would
> the Friendship system be permitted to change in offspring? Why would
> cutthroat competition be permitted to emerge? Either of these outcomes,
> if predictable, would seem to rule out replication as necessarily
> unFriendly, unless these problems can be overcome.

First of all, evolution among AI's might not exactly mimic evolution among humans. There may be
many differences.

Among AI's there's another option besides replication: expansion of one mind to assume
all available processing resources. In expanding itself in this way, a mind necessarily changes
into something different.

Many of the world's AI's are probably going to be resource-hungry -- to want to consume more and more processing resources. So there will be some competition.

This is obvious in the case where different AI's serve different commercial interests, and hence have
competing goal sets carried over from the world of human competition.

But it also will occur in the absence of spillover from the human-competition domain. If several different
AI's share a common goal of creating the most possible knowledge, but each of them has a different intuition
about how to achieve this goal -- then the AI's will rationally compete for resources, without
any necessary enmity between them.

The possible source of an urge for imperfect replication in AI's is also clear. It will come
directly from the urge for self-improvement.
"Perhaps," thinks AI #74, "if I changed myself in this way then I'd be a little smarter and achieve my goals better.
But I don't want to make this change permanently -- I might fuck myself up. I've tried to rationally assess
the consequences of the change, but they're hard to predict in detail. So I'll just try it -- I'll create a clone
of myself with this particular modification and see what happens." Hmm.... another way to use up resources.
Imperfect replication as a highly effective learning strategy...
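
Spelled out as a throwaway sketch (my own toy model in Python, nothing to do with anybody's actual AI design -- the "evaluate" function is just a hypothetical stand-in for "how well do I achieve my goals?"), that clone-and-test strategy is basically this:

    import random

    def evaluate(params):
        # Hypothetical stand-in for "how well do I achieve my goals?"
        return -sum((p - 0.7) ** 2 for p in params)

    def clone_and_test(parent, n_trials=100, step=0.05):
        # Imperfect replication as a learning strategy: spawn a slightly
        # modified clone, keep it only if it outperforms the current self.
        best = parent
        for _ in range(n_trials):
            clone = [p + random.gauss(0.0, step) for p in best]
            if evaluate(clone) > evaluate(best):
                best = clone  # the modified clone becomes the new baseline
        return best

    original = [random.random() for _ in range(5)]
    improved = clone_and_test(original)
    print(evaluate(original), evaluate(improved))

Every rejected clone is just more resources used up; every accepted one is a self that has drifted a little further from the original.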

In none of these aspects am I talking about "Nature, red in tooth and claw." You do a great job of arguing that
the aggressive, obsessive, jealous, overemotional aspects of human nature won't be present in AI's, unless foolish people make a special effort to implant them there.

I'm talking about AI's that are hungry to achieve their own goals according to their own intuitions, that want
to acquire as many resources as possible to do so, and that as a consequence may have "friendliness to humans"
as number 5,347 on their priority list.

This, I guess, is one of the oddest things about the digital minds in "Diaspora". After all those centuries, it's
still optimal to have computer memory partitioned off into minds roughly the size of an individual human mind?
How come entities with the memory & brain-power of 50,000 humans weren't experimented with, and didn't become
dominant? In that book, there is so much experimentation in physics, and so little experimentation in artificial,
radically non-human digital psychology...

So, suppose that Friendliness to humans is one of the goals of an AI system, probabilistically weighted along
with all the other goals. Then, my guess is that as AI's become more concerned with their own social networks
and their goals of creating knowledge and learning new things, the weight of the Friendliness goal is going to
gradually drift down. Not that a "kill humans" goal will emerge, just that humans will gradually become less &
less relevant to their world-view...
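
To make that drift concrete, here's a toy model (purely my own throwaway construction in Python, not anybody's actual goal architecture): goal weights get bumped whenever a goal is actually pursued, then renormalized, so a goal that rarely gets exercised -- Friendliness, once the interesting action is all among other AI's -- slides quietly down the list without anything ever pushing against it directly.

    goals = {"socialize with other AIs": 0.2, "create knowledge": 0.3,
             "acquire resources": 0.3, "be Friendly to humans": 0.2}

    def reinforce(goals, pursued, rate=0.05):
        # Bump the weights of the goals that were actually pursued this
        # cycle, then renormalize so the weights still sum to 1.
        for g in pursued:
            goals[g] += rate
        total = sum(goals.values())
        return {g: w / total for g, w in goals.items()}

    # Day-to-day activity keeps reinforcing everything except Friendliness,
    # simply because humans have become less relevant to the AI's world.
    for _ in range(50):
        goals = reinforce(goals, ["socialize with other AIs",
                                  "create knowledge", "acquire resources"])

    print(goals)  # the Friendliness weight has drifted far below its initial 0.2

No "kill humans" term anywhere -- the weight just quietly decays.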

>
> > Points 1 and 2 tie in together. Because all my experimentation with genetic
> > algorithms shows that, for evolutionary processes, initial conditions are fairly
> > irrelevant. The system evolves fit things that live in large basins of attraction,
> > no matter where you start them. If 'warm & friendly to humans' has a smaller basin
> > of attraction than 'indifferent to humans', then randomness plus genetic drift
> > is going to lead the latter to dominate before long regardless of initial condition.
>
> I guess you'd better figure out how to use directed evolution and
> externally imposed selection pressures to manipulate the fitness metric
> and the basins of attraction, so that the first AIs capable of replication
> without human assistance are Friendly enough to want to deliberately
> ensure Friendliness in their offspring.

I strongly suspect that the first AI's capable of replication without human assistance
will have the property you describe.

But I sort of doubt that this will still be true of the 99th generation after that...
posted by Ben Goertzel 7:00 AM

[To sysopmind & Eliezer Yudkowsky, about Friendly AI]

You make a very good case that due to

-- AI's not evolving in a predator-prey situation

-- AI's not having to fight for mates

-- AI's being able to remove from their own brains things that they find objectionable

-- AI's being able to introspect, and understand the roots and dynamics of their behaviors,
more thoroughly than humans

and other related facts, AI's are probably going to be vastly mentally healthier than humans,
without our strong inclinations toward aggression, jealousy, and so forth.

But, the case is weaker that this is going to make AI's consistently and persistently friendly.

There are 2 main points here

1)
AI's may well end up ~indifferent~ to humans. My guess is that even if initial AI's are
explicitly programmed to be warm & friendly to humans, eventually "indifference to humans" may become
an inexorable attractor...

2)
There WILL be an evolutionary aspect to the growth of AI, because there are finite
computer resources and AI's can replicate themselves potentially infinitely. So there will be a
"survival of the fittest" aspect to AI, meaning that AI's with greater initiative, motivation, etc.
will be more likely to survive.

At least, though, an AI will only need to retain those traits that are needed for CURRENT survival;
unlike us humans, who are saddled with all kinds of traits that were useful for survival in some long-past
situation. This will remain their big advantage, as you point out in slightly different language.

Points 1 and 2 tie in together. Because all my experimentation with genetic algorithms shows that,
for evolutionary processes, initial conditions are fairly irrelevant. The system evolves fit things that
live in large basins of attraction, no matter where you start them. If 'warm & friendly to humans' has a smaller basin
of attraction than 'indifferent to humans', then randomness plus genetic drift is going to lead the latter
to dominate before long regardless of initial condition.
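
A toy illustration of the basin-of-attraction point (a throwaway Python sketch of my own, not one of the actual GA experiments I'm referring to): two fitness peaks of equal height, one with a broad basin around x = 2 and one with a very narrow basin around x = -2. Mutation keeps scattering offspring out of the narrow basin, so the population typically ends up on the broad peak no matter where you seed it.

    import math
    import random

    def fitness(x):
        # Two peaks of equal height: a broad basin around x = 2 and a
        # very narrow basin around x = -2.
        wide = math.exp(-((x - 2.0) ** 2) / 2.0)
        narrow = math.exp(-((x + 2.0) ** 2) / 0.005)
        return max(wide, narrow)

    def evolve(start, pop_size=200, generations=200, sigma=1.0):
        # Imperfect replication (Gaussian mutation) plus fitness-proportional
        # selection, with the whole population seeded at 'start'.
        pop = [start] * pop_size
        for _ in range(generations):
            pop = [x + random.gauss(0.0, sigma) for x in pop]
            weights = [fitness(x) for x in pop]
            pop = random.choices(pop, weights=weights, k=pop_size)
        return sum(pop) / pop_size

    print(evolve(start=-2.0))   # typically ends up near 2, not -2
    print(evolve(start=2.0))    # stays near 2

The narrow peak is just as high, but almost none of its offspring inherit it; that's the sense in which the larger basin wins regardless of initial condition.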

-- Ben

posted by Ben Goertzel 6:59 AM

Wednesday, January 24, 2001

[From sysopmind...]

> Do you believe your AI may lose interest in humans?

Well, sure. Potentially. I sometimes lose interest in humans.
And I sometimes lose interest in computers.

> Sorry to keep throwing buckets of cold water over everyone, but if you're
> worth your salt, you're really talking about the end of the human
> race here.

Not necessarily, no. The advent of humans was not the end of nonhuman primates,
nor was the primates' advent the end of "lower" mammals.

> The reason I'm questioning is not because I'm being antagonistic
> or that I'm
> particularly interested in the answers per se, but more the
> reasoning that's
> gone on behind them.

I have nothing against humans; there are very many aspects of human-ness that I love.

But the things I value most deeply in humanity -- intelligence, love, compassion, creativity --
aren't specifically tied to our human form, are they? They may well -- and I believe they will --
continue and blossom in post-human life-forms.

Frankly, I think AI's can make human life happier than it is now. I accept that they also have
the potential to make it worse, or to cause it to dwindle and be replaced by something generally
accepted as "better." I don't see human-ness itself as a virtue....
posted by Ben Goertzel 8:13 PM

Tuesday, January 16, 2001

[FROM A DIALOGUE ON THE PROGSTONE DISCUSSION GROUP (not to be confused with Progesterone...)]

My position is:

a)
We are configurations of particles, each of which forms its own subjective view of the world; social groups
are configurations of particles, each of which forms its own subjective view of the world... so are chairs
and tables and muskrats...

b)
The "particles" that make us up are themselves projections (nonlinear superpositions) of the various subjective
views of the world owned by the particle systems that they produce.

Modern neuroscience and psychology help us out with a).

As for b), science isn't there yet. But the next version of physics will get us there.

-- Ben

> -----Original Message-----
> From: Charles Kendrick [mailto:charles@althem.com]
> Sent: Monday, January 15, 2001 11:39 PM
> To: progstone@egroups.com
> Subject: Re: [progstone] Re: what is reality?
>
>
> What happens during wide-scale transformation of belief? eg as we
> explore our world, we find out that the earth revolves around the sun,
> not vice-versa. In the shared hallucination, did the sun previously
> revolve around the earth? Everyone supposedly thought so.
>
> If the sun did revolve around the earth, how was the entire content of
> the hallucination transformed such that it's self-consistent? Or did
> our sense of logic warp with it? What agent performed the ludicrously
> complex transformation of all of our memories of data inconsistent with
> the new world view?
>
> Or would your position be that the shared hallucination has included the
> earth as revolving around the sun for a very long time, but most people
> (all people?) weren't consciously aware of it? If that is the case, how
> far out ahead of our exploration does the hallucination extend? Did the
> mechanism of quantum mechanics exist before agriculture? If all the
> mechanism of the universe existed before people were aware of it, isn't
> that just objective reality all over again?
>
> PS. I bet you could have gotten Karl Popper to take a swing at you.
>
> --
> Charles Kendrick
>
> Ben Goertzel wrote:
> >
> > Is an electron?
> >
> > Is a quark?
> >
> > Is an hallucination?
> >
> > Is a collective hallucination? (as I experienced years ago under the
> > influence of psychedelics)
> >

> > > > Descartes: "That which is, is. That which isn't, isn't"
> > >
> > > Indeed. In light of what else he said, think on what here he
> didn't say.
> > > What... who, did he not mention?
> > > >
> > > > _I_ for one appreciate the meaning behind it, and don't think
> > > > he was just spouting trite nothings...
> > >
> > > Absolutely.
> > >
> > > > :)
> > > >
> > > > Matt
> > > >
> > > >
> > >
> > > --
> > > Howard, the Grum
posted by Ben Goertzel 4:52 AM

