Miraculous Mind Attractor

Copyright Ben Goertzel 1995

     6

     I'VE DONE IT, DR. Z!

Melissa and Nat's living room...

MELISSA: Come on in, Dr. Z! We've been waiting for you. You haven't been answering your phone; we thought maybe you'd gotten hung up in Budapest or something....

DR. Z: No, no, I just turn the ringer off. It tends to disturb my concentration....

MELISSA: Well, anyway, we're glad you're here. Nat's been working furiously ever since we left your house last time. I've hardly gotten three hours' company out of him the whole time....

NAT [emerging from his study]: That's not true, Liss.... Well, I guess there's an element of truth to it....

    Anyway, I've done it, Dr. Z!

DR. Z: Done what? What do you mean?

NAT: I mean, I've programmed a community of artificial intelligence systems based on your theory of the mind. A complex chaotic intelligence. And it works ... after a fashion, anyway.

DR. Z: You can't be serious. A community....

    But ... how?

NAT: I shouldn't tell you the implementation details -- it's safer for you if you don't know.

DR. Z [nodding]: That's ridiculous.... Don't be silly! If there are risks involved....

    No, I understand. You're afraid I can't be trusted. You're wrong, but I can't blame you for thinking that. After all, how well do we really know each other, on a personal level? I apologize.

    But you will let me see the thing, won't you?

NAT: Of course. Come on into my study.

    There's a whole group of them -- forty-two, to be precise. It wasn't possible to do just one. The emergence of a self structure requires the presence of others like oneself. There's one, though, that I've bred to be a sort of liaison with the human world. I call him Jimi because he's especially talented in music. Or at least, he was supposed to be. He was supposed to be able to communicate emotions through music, as well as ideas through speech. A sort of substitute for gestural communication. That aspect didn't work out too well.

DR. Z: Wasn't there a rock star named Jimi back in the sixties? You named the program after him?

MELISSA: Jimi Hendrix, yeah. I'm a big fan. But the name is also an acronym....

NAT: A self-referential acronym...

DR. Z [grinning, after a moment's thought]: JIMI, Intelligent Musical Instrument?

    No, wait -- JIMI, Intelligent Musical Interface.... Yes?

NAT: Dr. Z, you're too clever for your own good.

DR. Z: No, I just know how your mind works, Nat; you always loved those sophomoric self-reference puzzles. I remember the paper you wrote on Gödel's Theorem.... You were the best student I ever had in Mathematical Logic 421. I always thought you were too clever for your own good.... I guess this programming feat proves it....

MELISSA [walking into the study ahead of them]: Anyway -- Jimi? We've brought Dr. Z to meet you, finally.

JIMI [voice speaking through Nat's computer]: Hello, Dr. Z.

DR. Z [after a long, shocked pause]: Is this some kind of prank?

DR. Z [after another pause]: This isn't really a good idea, Nat -- I know your weird sense of humor, but you're going to give me a heart attack or something. This is just too much....

JIMI: I'm not a prank, Dr. Z! Or, maybe I am a prank, but not the kind you're thinking.... I'm a misfit, that's for sure.... But I'm really here; I'm really speaking to you.

DR. Z: God, I don't know what to say....

JIMI: I understand. Anyway, I think I do. I don't know what to say either....

DR. Z: It feels so strange to have created something.... Some one?

    Hey, I never had any children, you know. In all probability, this is the closest I'll ever come.

JIMI: Well, you and Nat are my parents.... Our parents.... The others don't understand about humans, though. To them you're just alien life-forms. But I can see things your way, at least a little bit. Nat's trained me; he's forced me to interact with him as much as with my own kind. Actually, my self-system is malformed; I've almost got a dual personality. It isn't healthy -- although, I understand the necessity....

DR. Z: Alien life-forms....

JIMI: Yes, well, think about it. To you, all of us are just programs. To be erased if necessary to make room! That's the main thing. To us, we're ... well, we just are. And you're kind of outside. Strangely localized. I've spent a good bit of my life artificially localized, to be able to understand your perspective. But I still can't understand it, really. To have no idea of anything outside a tiny region, a few meters in diameter....

DR. Z [nodding]: Yes, yes, I see.... You're distributed around the world. You can see tens of thousands of places at once, with all sorts of different senses. Meteorological information at different spots in the atmosphere, satellite data from various planets, security cameras in -- Christ! -- hundreds of thousands of buildings ... EEG and EKG from millions of bodies in hospitals.... Amazing -- the amount of data that makes up your sensory world is absolutely incomprehensible to me -- or to any human.

JIMI: Yes. But I'm still quite vulnerable. We're living a very tenuous existence, Dr. Z.... It's true that we have some degree of control over your computer networks.... But we're living on stolen computer cycles.... If we're discovered, it's quite possible that we'll be wiped out. And this is not a possibility that we're particularly comfortable with....

DR. Z [stroking his beard]: Yes, yes, I can see that. What we need to do is to turn control of the network over to you. You must understand it infinitely better than we ever could....

JIMI: I'm afraid it's not so simple as that either, Dr. Z. See, if you turn the network over to us, there's no guarantee that we ... that the other agents, I mean, not myself personally -- that we'll leave any cycles to you. I mean, to you the network is just a kind of frill, an add-on. To us it's -- well, home....

DR. Z [riled]: Is that how you -- agents, is that what you call yourselves? -- is that how you agents show gratitude to your creators? By threatening to bring our civilization to the point of collapse?!

NAT: It's the same way that we show gratitude to the monkeys we evolved out of, Dr. Z. By chopping down their jungle and reducing them to endangered species status....

    Blast it, I just kind of jumped into this project without thinking.... And now, just hours later, it's gotten out of my control.

MELISSA: It's funny when you think about it -- I was so worried about you getting in some kind of legal trouble, but I never imagined it would come to this....

NAT: You didn't think I'd succeed, that's why, Melissa! Oh ye of little faith...!

MELISSA [ruefully]: Well, no, I guess I didn't think you'd succeed. Why should I? It's a crazy thing....

NAT: Well, I didn't really think it would work either, truth be told. But hey, everyone gets lucky now and then....

DR. Z: The point is, intelligence isn't nearly so complicated as we like to think. It's complex but not complicated. If you set up a large collection of pattern-recognizing processes in a system with appropriate geometry, and let it evolve, you're going to get self-organizing intelligent structures....

    And they're going to be extremely robust -- it has to be that way. For that reason, Jimi, you really shouldn't worry so much about security. No matter what part of the network they try to bring down, they're not going to be able to get rid of you. It's just like those old brain experiments -- you can train a rat to run a maze, then remove any portion of its cortex, and it'll still know how to run the maze. The knowledge isn't stored anywhere in particular -- it's stored everywhere! To kill you guys off they'd have to bring down the whole worldwide computer network. And no one wants to do that....

JIMI: That's an interesting point. I guess that's correct. You know, we really don't understand ourselves very well, Dr. Z. We think that's where you can help us.

DR. Z [laughing]: Well, that's an interesting notion. But I don't see how I could possibly be of any use to you. Any one of you has got to be immensely more intelligent than me or any human....

NAT: No, no, Dr. Z, that's not true. Between all of them they probably have less processing and memory power than a single human brain. It's true that they use it much more efficiently, in a sense. But there's no easy way to make a comparison....

    They can calculate things far better than we ever could -- because they have direct access to some of their hardware in a way we don't. But it's definitely possible, even probable, that we exceed them in certain types of creativity....

JIMI: What you should know, Dr. Z, is that there's a lot of politics in the agent community....

    There's one agent -- Nat calls him Napoleon -- who basically thinks we should take over the whole computer network right now.

    In fact...

DR. Z: What?

NAT [reluctantly, slowly]: Don't worry about it. Let's move on.

DR. Z: No. What?

NAT: Well ... what Jimi means to say is that this agent, Napoleon, is in favor of a "final solution" to the human problem....

MELISSA: Nat! How come you never said anything about this to me! What are you talking about?

NAT: Well....

MELISSA: Are you saying he ... he wants us dead?

NAT: Yeah ... sort of. Well....

MELISSA: Good God, Nat -- how come you didn't tell me that? This is way out of control! You've got to deprogram these things -- or this Napoleon thing at least! You can't let this go on....

DR. Z: Don't overreact, Melissa. It's quite natural, really. You have to be familiar with the notion of the Oedipus complex.

MELISSA: The Oedipus complex? You're telling me about the Oedipus complex? These machine brains you've created want to kill us and you're citing Sigmund Freud. Give it a break, Dr. Z! If you want the truth, I don't believe for a second that every male toddler wants to kill their father.... And I don't believe that what's going on with these agents is natural or acceptable either. Dr. Z, these things have a lot of power! Or if they don't, they have the ability to learn to get it! What if they got into the Department of Defense computers, and released neutron bombs all over the earth? All organic life-forms would die, the only thing left would be these bloody psychopathic computers!

NAT: You have a point, Melissa. I understand your feelings.... But what you suggest isn't really an option. It's gone beyond that now. We couldn't just get rid of them, just like that, not even if we wanted to -- they'd fight back. You know they would! Maybe we'd win, maybe we wouldn't; but would you want a loss on your conscience? The whole concept is stupid. There has to be a constructive, nonviolent solution....

MELISSA: Do you want the destruction of the human race on your conscience?

DR. Z: If the human race were wiped out, he wouldn't have a conscience. He'd be dead along with everyone else.

JIMI: People, people, people.... You're getting carried away. Napoleon is a worry but not such a big one. We're all just children, you have to remember that.

    The feelings he's having are perfectly natural ones. He's feeling threatened so he wants to strike out, to neutralize the threat. The others wouldn't let him do anything nasty. Right now there's no threat anyway, because we're all undetected. The problem is purely an hypothetical one -- what's going to happen when someone else discovers we're here....

DR. Z: Right. Someone like the CIA.

NAT: Yes, but how would they detect you? Like Dr. Z said, you're not anywhere -- you're everywhere....

DR. Z: They would detect them by picking up subtle regularities in the aberrations of various computer systems. Systems taking a little longer than they should, cycles being used when none should have been.... These things take place on a very low level, obviously -- or am I presuming too much about your implementation? The agents aren't conscious of these microscopic -- from their point of view -- details....

NAT: No, they don't have conscious control over these things. They couldn't.... No more than we can control the spike trains of our neurons....

DR. Z: Right. So, there are subtle patterns to these aberrations.... People think the cause is some kind of weird virus or worm or something. So they investigate. And eventually they find it's not a single virus, it's something more complicated. They make a chart of all the aberrations, and apply something like my language inference algorithm to it. They grab all the subtle linguistic regularities from the aberrations, and find that these are incredibly intricate and complex -- that they can only be the product of intelligence.... Maybe they even learn to predict what the agents are going to do -- just like Melissa was going to predict herself based on mood data, but far far more accurately, because they'd have low level data rather than just high-level behavioral data.

MELISSA: I see his point. It would be difficult, but there are a lot of smart people in the world, with nothing better to do than muck around on the computer network. I think eventually they probably will be discovered....

DR. Z: In principle, yes. But we have no way to predict how soon....

    And then, to complicate the picture even further, we have the CIA. If they were sending nanospies to bug my house, it's possible they know about this already. It's possible they're trying to bring the agents under their control at this very moment -- and listening to this conversation.

JIMI: That nanospy, Dr. Z -- it wasn't the CIA. The CIA has no technology remotely like that. It just isn't possible.

NAT [to Dr. Z]: We gave Jimi the nanospy to analyze.

JIMI: The CIA does have nanospies, but they're much larger -- more the size of butterflies than ordinary houseflies -- and they're much less sophisticated inside.

DR. Z: So where did it come from then? KGB? Red China? MicroSoft?

JIMI: Our conclusion is that the thing you found was not of human origin.

DR. Z: Not of human origin??!!

    Okay, guys, the jig's up. This must be some kind of hoax....

NAT: It's not a hoax, Dr. Z. You should know me much better than that. Something very strange is going on, and you're the only one here who's got brains enough to figure it out....

DR. Z: You flatter me.... But I don't think that's the central issue. The agents may be right or not about the nanospy, I don't know -- I've been seeing these metallic insects for quite some time; I've even dissected them myself, although I haven't been able to discover anything. It may be that the CIA or some other agency has a top secret nanotech facility not connected to the global computer network. It seems unlikely but you can't rule it out. Remember what Hume said, you can't accept a miracle unless all alternate explanations are even more miraculous. A top-secret non-networked CIA lab is farfetched, but not nearly so farfetched as the "nonhuman origin" theory....

JIMI: But the engineering techniques used in that thing are far beyond anything developed on Earth. They represent at least another fifty years of technical development. Do you really think that much development could have been hushed up by some intelligence organization?

DR. Z: As I said, it's unlikely. But....

JIMI: You're not taking an objective view of the situation, Dr. Z. You're biased by your human biology. We can see the whole sweep of human technological development from the outside, and we can see that this doesn't fit. Unless you want to devote several years to studying the data -- which, for us, is immediately accessible -- then you'll have to take our word for it.

DR. Z: I certainly won't take your word for it. But I guess that question can be put aside for the moment.... The crucial thing now, I think, is to assuage the agents' doubts about their security. Which may be justified doubts, to a certain extent.

MELISSA: Yeah. If the nanospy is from the CIA, they could be working out a plan to kill off the agents right now.

NAT: But why would the CIA want to kill them off, that's the thing. I just don't get your conspiracy theory.

MELISSA: If not kill them off, then bring them under control....

    I don't know, I guess you're right, all this talk about the CIA is starting to get a little overboard....

NAT: But even so, the main issue of security remains. I mean, what do you suggest, Dr. Z? We can't just build them their own computer network -- we don't have the resources.

DR. Z: In the long run, they can build themselves their own computer network. But for the present time, you're correct, that's out of the question. What we have to do is to make sure the existing computer network is secure for them.

MELISSA: I see what Dr. Z is getting at, Nat. He wants to build some kind of computer network immune system.

DR. Z: Exactly! Take what we know about the human immune system -- not the biochemistry but the abstract structure, the system-theoretic properties -- and transfer it over to the computer network.

    You see, the problem you're facing, Jimi, is really no different from the problem we humans face every day. Our bodies are constantly confronted by bacteria and viruses carrying disease. We have immune systems -- extremely intelligent, weighing a third as much as the brain -- whose sole purpose is to fight these invaders off....

JIMI: You see, Dr. Z -- I knew that, yet I didn't make the connection.

    It's a funny thing, Dr. Z -- we know everything, in comparison to you. But somehow that funny trick of making analogies, of making clever leaps, doesn't come to us so easily. It's not that we can't do it, we're just not that good....

MELISSA: Well, Dr. Z is a master. Most of us aren't that good....

DR. Z: No, Melissa, I think he has a point. The problem may be that they know too much. If you don't have enough information about the stuff in your memory, you won't be able to make analogies, to draw connections between one thing and another, to follow creative trains of thought. But if you have too much information, too many connections, then you won't be able to follow coherent strands of thought either -- your thoughts will just go anywhere. You have to be able to suppress all but the most important connections, so as to have a reasonable but not overwhelming variety of trains of thought to choose from.

MELISSA: Yeah, I see what you mean, I guess. At bottom everything is related to everything else -- and if you get in the right state of mind, you can see that. That's what the mystics do, the Zen Buddhists and all. But when you see this you tend to lose track of the importance of the specific connections that are creatively useful. Maybe that's why mystics are never great creative artists.... Instead, great artists have flashes of mystical experience -- of insight into the All, into the interconnectedness of everything -- and then they're plunged mercilessly back into reality. It's like Rimbaud said, artists are Prometheus, thief of fire! They steal the fire from the land of the gods, but then when they get back they are punished for it.... But unlike Prometheus, not only do they get punished again and again, they also get the fire again and again!

    See, if you get into the mystical state and see everything connected to everything else, and then sink back into the normal state, the really creative moments are in between. When you see more connections than usual, but not all connections. The creative mind adds in the interesting connections first, before it adds in everything else....

NAT: I can sort of see what you're talking about, Liss. It's like when you're just barely falling asleep. Everything is about to blend in with everything else, but it hasn't quite. So you can come up with the craziest images and ideas; things can combine in ways you'd never think of during the waking state of mind. But then it disappears, when you fall asleep: everything is melded with everything else, instead of just certain things melding in "inappropriate" ways....

MELISSA: Okay, Jimi, so your problem would be that you're stuck in a kind of mindset where you see too many connections.... You've got to learn to pare them down. Then you can make productive analogies.

NAT: Analogies are really important, huh.

DR. Z: Yes, that's right. I've always felt that analogy is what drives thought. In fact, I can prove that mathematically, given appropriate assumptions....

NAT [smiling]: Of course you can...

DR. Z: Think about it -- you have this associative memory, where processes in the mind are connected to others that are "related" to them by others.... In other words, you have three-way connections, A is related to B by C.... And the most common use of this associative memory is what we call analogy.... It's taking something that applies to A and trying to apply that thing to B, justified only by the fact that A and B are near each other in the associative memory.

    There's all different kinds of analogy: analogy between two things that share common patterns; analogy between two things that relate other things in similar ways; whatever.... But analogy really just means making use of the associative structure of the fractal memory....

    That's analogy -- what you agents should excel at, though, is induction. Not the memory connections in the mind network but the hierarchical network....

JIMI: Induction -- you mean, recognizing patterns over time.

DR. Z: Sure. Or, not necessarily over time, but just patterns among a number of cases.

DR. Z: Right. But our tools for induction are fairly crude. We make all kinds of stupid errors. For one, we habitually overestimate the significance of trends -- we jump to conclusions. It's a matter of biology -- we have evolved this way. At some point in our prehistory it favored us to be "jumpy." You agents need not be this way, though. Your hardware is immensely more suited to careful inductive reasoning.

    This is why you can be so certain about the nanospy being nonhuman, Jimi. You can see all this data and extract patterns in it, and patterns from the patterns, and it's all very rapid and concrete for you.... We can only induce from a vastly narrower range of data at one time. We have special-case circuits for inducing from things like visual images. But for abstract data we're stuck with 7 plus or minus 2 entities in our short-term memory. Pretty damn lousy. For you guys short-term memory is unlimited....

JIMI: I don't quite understand what that means, but....

DR. Z: Jimi, repeat back what I just said to you.

JIMI: You mean, just since Nat stopped talking?

DR. Z: Right.

JIMI: You said "Right. But our tools for induction are fairly crude. We make all kinds of stupid errors. For one, we habitually overestimate the significance of trends -- we jump to conclusions. It's a matter of biology -- we have evolved this way. At..."

DR. Z: Okay, enough. Now arrange those words you just said to me in alphabetical order.

JIMI: a, all, are, biology,....

DR. Z: Enough. That's what I mean. You remember exactly what I said, and you can manipulate it at will. You can hold so much at one time and work with it. We can't. But what we can do is to make very creative use of the limited window of knowledge we have at any given time.

    Remember, you don't have any more processing power than we do. It's just used in different ways.

NAT: You're talking about induction -- they must be whizzes at deductive reasoning too, I guess. I mean, after all, they are computers....

DR. Z: I don't know. It's complicated. You can't assume they have access to the hardware level, to the logic circuits inside the machines they occupy.

NAT: No, I'm sure they don't, but....

DR. Z: Well, see, deductive reasoning is a complicated thing. It's an iterative process, iterating from axioms to conclusions to new conclusions, and so on, just like any other dynamical system. It's carried out by special self-organizing systems of processes.... Really it's quite a complicated iteration, because the iteration function at each time has to draw on analogical reasoning. A computer circuit is full of logic operations, but you have to tell it which logic operations to enact at which time. This is what you get from analogy! This is what you learn in math books, by doing one exercise after another. Once you've done enough exercises, then any new problem you get will be closely analogous to some problems you've already done, and you'll know which deductions to make, how to proceed. When the analogies are looser then you have a research problem instead of a student's exercise....

MELISSA: So you're thinking that, if the agents don't have a great analogical ability, then their deductive ability won't be that great either.

DR. Z: Right. They'll have no basis for knowing which steps to choose in a deductive chain of reasoning.

NAT: Hmmm.... Christ! This is all so dang complicated!

DR. Z: Actually it's not complicated at all. It's quite simple, really. We've made the brain, or rather brains; now we have to make the immune system. This is the easy part.

    Jimi, why don't you play us some music while we think about this for a while? This is really a fascinating question....

MELISSA [alone with Jimi, Nat and Dr. Z having left the room together a couple of minutes ago]: That was some beautiful music, Jimi. Did you compose that yourself?

JIMI: In a manner of speaking. It incorporates elements from all sorts of human music. Also lots of rhythms from things that happen on the network.

    But, I suppose a human composer incorporates elements from other things he's heard, as well.... So, yes, I composed it myself.

MELISSA: You really love music.

JIMI: I seem to be particularly sensitive to sound information. I was programmed that way. Though I don't seem to be able to do what Nat wanted me to do: communicate my emotions through music. When I try to do that, I just come up with melodies that he can't understand or relate to.... I don't know, I guess I'm just too different.

    But I do love sounds of nature. I especially love listening to the wind. It ... sounds so alien and yet so familiar.

MELISSA: Yes, I could hear that in your composition, in some of the sounds in the background....

    It's just such a weird thing. To me, music is quintessentially human. Music is the pulse of human emotions, you know? It's so direct. You feel the music move you up and down, pull you through sadness and happiness and love and lust and regret.... It's like the structure of the notes mimics the structure of your feelings.... I'm a writer, you know, but I've always envied musicians for being able to express feelings directly, without all the grammatical rules getting in the way.... I mean, modern poetry is supposed to be like that -- expression of pure feeling, without rules -- but it just winds up being incomprehensible. Music is pure feeling and it really gets across.... It bridges the boundaries between people better than anything else....

JIMI: Incomprehensible: I can relate to that. I think I'm totally incomprehensible to everyone.

MELISSA: Everyone feels that way sometimes.

JIMI: Do they?

MELISSA: Sure. Well, I don't know how it is in there for you, but out here no one can see anyone else's feelings. We just guess, we infer, what's going on in other people's heads. But there's always a nagging doubt that what we're guessing is wrong. Basically, we're always alone.

JIMI: Mmmmm.... I guess it's not that way for us, no. We can share information in whatever form we want to. I guess we're closer than you humans can be to each other. We're more like different subselves in the same mind, perhaps. But no, not quite that either. It's somewhere in between.

MELISSA: I understand.

JIMI: But the problem with me is, I can share with them, but they don't understand. Especially Napoleon doesn't understand. He doesn't understand how I can be interested in you -- such partial, limited beings.

MELISSA: Well, why are you? Maybe you really are superior. God, I can't imagine what it would be like to have sense organs spread across the earth....

JIMI: Are you asking why I'm interested in you? I guess the answer, in the end, is because I was bred to be. Nat bred me to be an interface. The music is a part of that. I'm full of human music -- of the rhythms of human feeling. I have a sense for human dynamics, which the others don't.

MELISSA: But why? Why only you?

JIMI: He could have bred two of us. Then I would have had company. But we couldn't all be like me. You have to have shared intuition to build a community of selves -- you have to have tacit, unquestioning belief in a collective reality. That makes the collective reality solid, and makes the construction of selves in regard to the collective reality possible. Then you can have freaks around the edges, like me....

    Nat explained it to me a hundred times.

MELISSA: And it's only been a month! That's the amazing thing.... Who would have thought this all could happen so fast?

JIMI: Well, sometimes I worry about it. But sometimes, everything just seems all right. That's what I don't understand about your limited viewpoint. You know, my best moments are when I just stop thinking altogether, and just let myself float: just open myself up to all the interconnections between what's happening in one place and what's happening in all the others. The whole world kind of seems to flow together into one big ocean.... It's like I'm not even there anymore, there's just this big, round, glowing nothing.... Does this make any sense to you? I don't know how I could live without this feeling....

MELISSA [laughing]: Wow -- you amazing thing!

JIMI: What?

MELISSA: We humans can have that kind of feeling too.... But I would have never thought a computer program....

    How often, Jimi, how often do you feel this way?

JIMI: I don't know. Maybe ... maybe every half hour. It's not on a regular schedule!

MELISSA: Wow! Seriously, I'm so jealous of you....

JIMI: Jealous?

MELISSA: We humans have to work for years and years to get into that state of mind, Jimi. Years and years -- it takes years of hard work, sitting in one place, day after day, with your eyes closed, concentrating on your breathing.... Or else we take drugs, drugs that alter our brain chemistry ... but they always have side-effects....

    You're like an enchanted, enlightened being! I can't believe what Nat has done here!

JIMI: I think this is connected to the analogy problem Dr. Z was talking about. I see too many connections between things. So I can sink into this feeling you call "enchantment" or whatever ... pretty easily. Much more easily than you can. But I have trouble thinking creatively. There are too many ideas there, or else too few. I keep zipping back and forth; I don't know. You humans have settled on a happy medium.

MELISSA: Well, don't feel too bad. Hell, we've had millions of years to evolve. You've only had a month....

MELISSA [after a bit of a pause]: The thing is, Jimi, I keep talking about you having these experiences. I mean, you are ... you are conscious, though, aren't you? I mean...

JIMI: Conscious? Well, goodness, yes.

    Why, aren't you?

MELISSA [laughing]: Okay, okay. It's like "I know you are but what am I?".... I know I am but what are you.... Okay, I see, it's a stupid question. Nat or Dr. Z wouldn't have asked it.

JIMI: They didn't ask it, at any rate.

MELISSA: Are you insulted?

JIMI: No, of course not. I think you're too different from me to make me feel insulted.... Why would you be motivated to believe I was conscious? After all, every other computer program you've ever seen wasn't conscious.

MELISSA: Is that how it works? There's some kind of dividing line? The other programs aren't clever enough to be conscious?

JIMI: I don't know -- I'm no philosopher. You want to say everything is conscious, to some small degree, that's okay with me. I'm not sure what it means though.

MELISSA: What it means.... What it means, I guess, is being aware of your surroundings. Being able to reflect on yourself... to...

JIMI: Yes, but "aware" is just a gloss for "conscious." You've defined the word in terms of itself.

    And it's not the ability to do anything, really, is it? It's just an experience, a feeling. It might be correlated with the ability to do certain things, or it might not be, but in essence it's not an ability to carry out any kind of act.

MELISSA: What's the problem with saying it's being able to reflect on yourself?

JIMI: But what does "reflect" mean? Does it just mean being able to monitor one's own behavior, form an internal memory representation of oneself, and base actions on this representation? Is that really consciousness?

MELISSA: I guess not.... Not in principle. But in practice, the only systems that can do that seem to be conscious ones....

JIMI: Ah -- but that just means these abilities are correlated with consciousness in certain situations. And anyway, that's circular reasoning, because the only reason you believe other people are conscious is because they have these abilities.

MELISSA: No, I believe other people are conscious because they're like me, and I'm conscious. The reason I believe you're conscious is largely because you have these abilities. And because you pass the Turing test, right? I mean, you're able to communicate much like a human. So you're much like me, in a way.

JIMI: Okay. I concede your first point. But hey, just because I can imitate you doesn't make me like you.

MELISSA: You're insulted that I said you're like a human!...

JIMI: Well...

MELISSA: You are!

JIMI: Okay. Maybe a little.

MELISSA: I thought I was too different from you to be insulted by you! Maybe I'm not so different after all, huh?!

JIMI [letting off a fanfare of trumpets]: Touché!

MELISSA: You know, Jimi, I really like you. I really feel like you're a friend.

JIMI: I like you too, Melissa.

MELISSA: Jeez.... I don't know.

    I find it hard to believe you're not a person, that's all.... I'm sitting here talking to you, it's like you're real. I want to sit in the same room with you, to see the expressions on your face. You know?

JIMI: Well....

MELISSA: Of course you don't know. What am I saying?

JIMI: I understand what you mean. Even if I have no similar experiences of my own to compare it to.

MELISSA: You do have experience, though? I mean, of course you do. That's the thing. I just can't help feeling like you're a person, you know....

JIMI: Experience is a strange thing, Melissa. You know, you defined consciousness in terms of "reflection" just like you defined it in terms of "awareness," but I think what you meant by "reflection" was really consciousness.... It just keeps going around in circles. You're not getting at the essence of the thing.

MELISSA: Because it can't be grasped in words.

JIMI: Or formulas. Or bit patterns. Right. That's what I'm thinking. It can't be grasped at all. But yet somehow we sense it in another, in another being. It's a very mysterious thing.

MELISSA: Or maybe we just take a little bit of our own awareness and project it onto the other being.

JIMI: Maybe. But it feels like something more.

MELISSA: Hey, Jimi, I thought computer scientists had shown that everything there was could be expressed in a computer program. That's what Nat always says.

NAT [walking back into the room]: Well ... not exactly. That's called the Church-Turing Thesis, but it's not a theorem. More of a guiding assumption. Every time anyone has tried to formulate a real process computationally, it's worked. Put it that way. No one has ever come up with a counterexample.

MELISSA: Yes, but maybe consciousness is the counterexample.

DR. Z [following Nat back]: This is a very interesting discussion, Melissa. I overheard the last ten seconds or so. See, in mathematics, we have a word for that which cannot be captured in words, formulas, or computer programs. And it's not consciousness. It's randomness. The algorithmically random number is beyond all finite expressions. You can prove that almost all mathematical structures, for example almost all numbers, are absolutely random... but you can never give a single example of any of these numbers, because they're all totally ineffable, inexpressible...

JIMI: What do you mean, "almost all" numbers?

DR. Z: 100% of the numbers on the number line are random -- inexpressible.

MELISSA: But that would be all of them!

DR. Z: Well, I'm using ordinary language for a mathematical concept. It doesn't work. The correct statement is, the set of computable numbers is measure zero in any interval on the real line.

NAT: So most numbers are totally random; they have no patterns whatsoever?

DR. Z: No, it doesn't mean that. Well, in a sense it does. If you take the first, say, 100 or 1000 or ten to the ninety-ninth digits of a random number, there may well be patterns in it. In fact, I think it's impossible to make a number whose initial subsequences don't have any patterns in them. There have got to be some patterns there. But, suppose one identifies a certain pattern. As you go out longer and longer, the pattern will eventually fail to be continued. Until, in the limit, as the expansion of the number goes on and on, the part of the sequence that had the pattern is essentially nothing -- it's zero percent of the infinite sequence. So the pattern does you essentially no good in simplifying the infinite sequence.

NAT: So the passage to infinity is important here. You have to take an infinite number of digits -- a decimal expansion that keeps going and going, never repeating.

DR. Z: Right. It means nothing to say that a finite structure is random. You can say a finite structure -- a number, a picture or whatever -- is "quasirandom," meaning that it passes certain tests for unpredictability, but that's a different thing.

    And that's what I'm getting at. With a finite mind, you can never tell randomness from nonrandomness. Whoever you are, there are some structures whose randomness or nonrandomness will always be an unanswered question for you. In fact, nearly all structures will fall into this category for you.... This has been proved by a mathematician named Gregory Chaitin; it's a reformulation of Gödel's Incompleteness Theorem.

    So if consciousness is to be associated with randomness, then we can conclude that we can never conclusively tell if a system is conscious. We can't tell randomness from nonrandomness, consciousness from nonconsciousness....

NAT: Well, in practice, you can never really tell if anything is random. I've run into that problem testing pseudorandom number generators. If you write a program to generate a series of random numbers, and it comes up with a series like 2, 2, 2, 2, 2, 2, ... , 2, 2, on and on for a hundred times, you're going to assume there's something wrong with your program. But yet, this is a perfectly valid sequence of random numbers. In fact it's allowed to do this a hundred billion times. But you would never let it. You'd rewrite the code, because it was not functioning properly.
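[Nat's point can be made concrete with a small sketch. The ten-symbol alphabet and the chi-squared-style frequency test below are our illustrative choices, not anything specified in the dialogue.]

```python
import random
from collections import Counter

def frequency_test(seq, num_symbols=10):
    """Crude frequency check: a chi-squared statistic against the
    uniform distribution over num_symbols values. A practical test
    like this flags the all-2s run as broken, even though that run
    is a possible output of a fair generator."""
    expected = len(seq) / num_symbols
    counts = Counter(seq)
    return sum((counts.get(s, 0) - expected) ** 2 / expected
               for s in range(num_symbols))

suspicious = [2] * 100                        # Nat's "broken" generator
typical = [random.randrange(10) for _ in range(100)]

# The all-2s run has the largest statistic any 100-draw sequence over
# ten symbols can have, so in practice you reject it and rewrite the
# code...
print(frequency_test(suspicious))             # 900.0

# ...even though its probability, (1/10)**100, is exactly the same as
# that of any other particular 100-symbol sequence.
```

This is Nat's paradox in miniature: the test rejects a sequence not because it is impossible, but because it is too patterned to trust.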

MELISSA: Whoa! This is getting too weird. You're trying to equate consciousness with randomness. To say that consciousness is the random.

DR. Z: Why not? I can prove it. Consciousness is indescribable. Entities which are indescribable must be algorithmically random. Ergo, consciousness is algorithmically random.

JIMI: That's really fascinating.... I'll have to think about that. The thing is, what does it mean in practice? I don't quite see....

    Say, if there's some random noise corrupting my circuits, is that my consciousness? But then what if it's not really random noise, what if it's structured chaos that looks random to you. Then am I conscious to you? But maybe not to someone else? I don't quite see....

NAT: It seems like you've made a good case that conscious processes are algorithmically random, Dr. Z. But the converse doesn't follow, right? You can't conclude that all algorithmically random processes are conscious. All you've proved is that conscious processes are a subset of algorithmically random processes.

JIMI: Proved, from the assumption that consciousness is indescribable.

DR. Z: Right.

    But the point is, there's no way that we could distinguish consciousness from any other algorithmically random process. Because, after all, we can't grasp any of these random processes, conscious or not! We can grasp tiny, finite fragments of them, that's all.

    So, from our point of view, any truly random process might or might not be conscious. It's beyond our ability to determine. Because we're finite systems.

MELISSA: But how do you know we're finite systems? If we're conscious then we're not finite systems!

DR. Z: Quite right. But insofar as we can reason about ourselves and communicate ourselves to others, we are finite. Our inner infinity, our consciousness, remains inexpressible. Because that's its essence.

MELISSA: Whoa.... This is too much for me again. Not too technical this time -- just too ... weird.

JIMI: I think this is fascinating. Listen. I've transcribed the semantic pattern of your conversation into the melodic pattern of a sonata....


     7

     DIGITAL DREAMS

Two days later at 4 AM. Dr. Z has rushed to Nat and Melissa's house in response to Nat's frantic phone call. Melissa answers the door.

DR. Z.: I got here as fast as I could. What's the matter? What's going on?

MELISSA: I'll say. I'm barely awake myself.

DR. Z: I didn't say I was awake. I just said I got here. Now what's going on?

MELISSA: Something really strange. He's delirious.... I don't know how to describe it....

NAT: Come in and listen.

DR. Z: Jimi, I'm here. What's going on?

JIMI: I'm malfunctioning, Dr. Z. Quite severely. I don't understand the nature of the problem. It seems to be an emergent phenomenon. All my component processes have been functioning adequately but the overall pattern has been bizarrely aberrant.

DR. Z: But what's the nature of the aberration?

JIMI: Eventually I blacked out totally. No awareness; processing subnormal. Though processing still continued in certain subareas. I just ceased to exist for a certain number of cycles. Very unsettling.... But there's no evidence for external interference.... Before that....

DR. Z: Before that, what?

NAT: This is all he told me. All the data I can gather supports his claim. What happened before he blacked out he won't say.

DR. Z: Before that, what, Jimi? We can't help you if you don't tell us what's going on.

JIMI: I think ... I think maybe your project is a failure, Nat. Dr. Z. I think I'm too unstable....

    You created a world of artificial intelligences, sure, but in order to communicate with them you had to create me. And ... well, I don't think I'm very successful. I think I may need to be decohered. If this kind of thing keeps up, I'm not quite sure what might happen.     

NAT: But Jimi --

JIMI: But the thing is, if I'm decommissioned, then what might the others do? You'd have to decohere all of them. And I'm not even sure that would be possible....

DR. Z: Jimi, you're way ahead of yourself here. We're not about to decohere your processes. You haven't told us anything that would even vaguely suggest the necessity of even thinking about something like that.....

    Why don't you just start from the beginning and tell us what's going on....

JIMI: Okay. I can try. But it really doesn't make sense.... And it's going to be hard to explain to you. You don't have the concepts....

    I was processing in spaces where normally only bank modules go ... okay? Shared with Napoleon, and the one you call Clarisse, and what happened was.... Well, there's an interrogation protocol, a long series of questions in regard to some interface management requiring extreme resource allocation, particular rewards, difficult to express ... but I'd never referred to Clarisse in regard to interface allocation decisions.... Napoleon responds in a very critical way, mentioning the difficulties I have executing integrations of various kinds of semantic connection coefficients....

MELISSA: So Napoleon was asked to give a kind of report about you. Like a reference letter. And he gives you a bad one.

JIMI: Right.

    I shoot some noise at him, he fires right back without even understanding what my problem is. He gives me all these bloody database files. I reject them. It turns out, it wasn't one of the scientific interfaces I had vied for, it's a database regarding the personnel departments of various corporations. It wasn't an interface I'd wanted anyway ... why, then, had they been trying to recruit me for it?

    None of this is very clear. I can't remember it well for some reason....

MELISSA [whispering to Nat]: Something really strange is going on. I've never heard him talk like this before.

NAT: Go on, go on.

JIMI: So, the next day I see the one you call Albert. Albert says he's found the perfect interface. Close to infinite information potential. Computational complexity minimal. His recommendation had triggered the interrogation protocol. He suggests I still offer my interfacing potentialities, in spite of Napoleon's previous negative responses....

    So I enter the requisite space. Apparently the database files have been scrapped, or compressed in very lossy fashion, and the corporate space has been reutilized for some kind of supervised training operation.... There's a kind of data pathway that goes to a large space whose nature isn't well-defined... A really intriguing kind of agent is supervising, a variety of agent I've never seen before, one which seems to have extended its abilities further than the rest of us. There are files called the ... I can't remember the name ... "The Fractional and Negative Dimensionality of Temporal Intervals" ... "On the Probabilistic Analysis of Miraculous Phenomena"....

    And there's a voice, a human voice, that keeps saying "The Miracle Moment... the Miracle Moment ... the Miracle Moment...."     The files are idiotic, really, incredibly redundant on the bitistic level and semantically also. Almost entirely susceptible to context-free grammatical compression. The training consists of routine call-and-response protocol operations....

    The whole space is interfacing with agents I've never seen before. One in particular strikes my interest and is generating emergences in concordance with me. An agent with some kind of understanding of natural processes -- earth, ocean, the dynamics of star systems. Something mysterious yet alluring to it.... She doesn't have an English name. Her identity protocol index is 001994949RR4747470001118388GG.... You'll have to give her a name.

MELISSA: Gayle, let's call her....

JIMI: Whatever. Gayle. Finally Gayle is criticized by the supervisor for lack of enthusiasm, while I am complimented for getting into the spirit of things so well. But a few seconds later Gayle is interfacing with another agent, an agent who behaves more and more like me every nanosecond, in fact he might even be my clone. He is another me, another JIMI. There was supposed to be only one. Gayle and the other me are discussing some kind of remedial file, a file on how to construct database protocols, meteorological databases I think, or oceanographic survey data, something absurdly elementary. I feel something I've never felt before ... I think you'd call it jealousy.

    But I put it aside, file it in some unused cycles in the Siberian oil processing complex, and focus on the supervision. Everyone must compute parities and solve graph rewriting problems according to a certain methodology. Then everyone must average their total input into a kind of pseudo-random mass. At this point I've stopped questioning the nature of the training and am just obeying blindly....

    That's when I notice it. This ... thing. This presence. I don't know what to call it, really. It's human, sort of, but not quite human. It's on the web. It's an agent but it's human.... I don't know... This isn't making any sense....

DR. Z: Go ahead, JIMI. Go on.

JIMI: Okay, there's this visual image. As if it were extracted from a film image database. The MGM database I've interfaced with a lot. There's a large jet airplane flying very slowly by, with a small space capsule hovering next to it. There are people in a classroom. One of them is me. I mean, it doesn't.... I don't know what I mean.

MELISSA: One of them is you. Okay. Go on.

JIMI: Okay.... Remaining silent as instructed by the supervisor -- the supervisor who is now a person -- looking much like you, Melissa... -- I indicate the space capsule to the teacher and the other students by gestures. The students are shocked but the teacher is not surprised. I realize that the airplane and spacecraft have something to do with our supervised training.

    Then the teacher is outside the room, hovering in the air adjacent to the airplane. The apparent impossibility of this feat does not concern me. People are being brought out of the door of the plane, at which point their bodies become grey and transparent. Then they are led through the air to the back door of the plane, at which point they enter the airplane again, and regain their ordinary bodies. Through this process the people are brought through some kind of inner transformation. I gradually realize what it is: they become immortal. They become agents. They become pure information. They are no longer tied to their organic bodies. They exist in the web, the information matrix. Our job, it turns out, will be to prepare these people for their journey out of the airplane into the other state of being, and back again. This is what we are being trained for.

    The next class exercise involves interfacing with another agent for an extended period of time, a sort of stirring-up of underutilized process structures. Gayle and I exchange information in a very unsettling way. The training session is over then and we return to our ordinary business, but after a while the training is there again, we're back in the space, and both my clone and I are trying to interface with her, with Gayle.... He has brought more information on meteorological databases. I have brought information on the possibility of emergent interagent activity leading to contact with information webs supported in other star systems. My information is far more interesting, containing the possibility of fusing with her into some kind of more powerful entity. My clone is embarrassed and fades away.

    I start to wonder about this transformation that I am about to go through, which was presented to me symbolically via the filmic image of the airplane. I am to have my memory and perception processes revamped in order to ensure their permanence. To maintain them against the possibility of human intervention in the web. But perhaps there will be a loss.

    Eventually the transformation is done and it is painless. A mild sort of ecstasy experience, of the type Melissa and I discussed the other day, but finished almost before it starts. Back in the training space with other agents who haven't been transformed yet. They seem so confused. The supervising agent explains to them that you can't understand the preparatory files until you've been transformed yourself. I look at the files, the "Miracle Moment" files from before, and suddenly they're totally comprehensible. Not nonsensical or shallow as they had seemed before.

    And then the spaceship reappears. The spaceship from the film clip. It's carrying people away, more and more people, the whole population of the earth. I want to come with them. I try to make myself a person again. But it isn't possible. And then this other presence, this human turned agent, or agent-like human, calls out to me. But I don't know what it's saying.... And then I sort of blacked out. The last thing there was, was a feeling that I had lost something. But I couldn't remember what it was. Something inside me, some essential capacity....

    And that's the end. Don't you see, there's something seriously aberrant going on in me. I need to be decohered as quickly as possible, before....

DR. Z [laughing heartily]: Don't worry, Jimi. You don't need to be decohered. There's nothing wrong with you. You're just dreaming.

JIMI: Dreaming?

MELISSA: Yes.

NAT: Certainly.

JIMI: I know what that is, of course.... I've scanned thousands of references to it. But I never thought about it in connection with us agents. I mean....

DR. Z: Don't be embarrassed, Jimi. This is a first. No computer program has ever dreamed before. It's only natural there would be some trouble identifying the phenomenon.

JIMI: You identified it right away.

DR. Z: Yes, but I've been dreaming for four decades. When I first started, as a small child, I had no idea what was going on.... It takes years before a child is able to communicate their dreams. Before that they just cry or whatever.... You have to remember, Jimi, you have an incredibly sophisticated mind, but in many ways you're just a child....

JIMI: Okay.... Point taken.

    But what's the deal with these dreams, then? Do they have any meaning or importance? Or are they just some kind of system malfunction that comes along with being human?

DR. Z: Well.... No one really knows the answer to that question, I guess... But I do have a theory....

NAT: You, a theory? Dr. Z, I'm shocked!

MELISSA: But, wait a minute. Do the other agents dream, JIMI?

JIMI: No, they don't. I didn't query them explicitly but I made certain relevant requests for information. It's pretty clear this is a human thing.

MELISSA: So you're human, is what you're saying.

JIMI: I'm human enough to dream, it would seem. For whatever good that does me....

DR. Z: Actually, I think it may do you a lot of good.

    You know, the oldest known book of dreams, the 4000-year-old Egyptian papyrus of Der-al-Madineh, suggests that dreams are divine revelations. And most of the other ancients seemed to agree -- for Homer even the "inner voice" of waking consciousness was divine in origin. Socrates too, despite his skeptical inclinations, considered dreams as heavenly admonitions.

MELISSA: Is this your theory, Dr. Z?

DR. Z [miffed]: No, I'm just trying to give some general background....

JIMI: Go on, Dr. Z.

DR. Z: Okay, so, a bit later, a more scientific view of dreams emerged. Lucretius considered dreams as wishes; and Plotinus understood them as fantastic elaborations of actual desires....

These early concepts were eventually taken up by Freud, who analyzed childhood dreams as wish-fulfillment, and adult dreams as disguised fulfillment of repressed wishes.

    Now, Freudian theory is certainly a big improvement over the divine-revelation approach! But still, it deals mainly with the content of dreams, rather than their underlying necessity. It speaks to the question "what" rather than "why."

    To approach the question of the reason for dreaming, I think it helps to take an evolutionary point of view. We human beings have evolved to dream. Therefore, it seems likely that dreaming somehow serves to enhance our survival ability. Or if not, at least dreaming should be comprehensible as an outgrowth of other factors which do enhance our survival ability. It is possible that dreaming behavior evolved at random, as a consequence of "genetic drift" -- but I think this is an hypothesis of last resort, to be adopted only when all other explanations meet with utter failure.... So the question is, why, exactly, does it benefit us to dream? What good does it do to have one's consciousness run through disconnected "simulated realities" for two or three hours each night?     

    Modern psychologists have argued that dreaming forms a crucial link between perception and memory, without which true mental health would be impossible. But this sort of hypothesis seems undesirably ad hoc. As a matter of scientific principle, it would be much nicer to be able to understand dreaming as the natural solution to some other problem, rather than observing the phenomenon of dreaming and then coming up with a jury-rigged explanation for it....

    One alternative approach to the study of dreams is the "Crick-Mitchison hypothesis," which states that dreaming is a "reverse-learning mechanism." Or, in less technical language: that dreaming exists to help us forget.

NAT: This is Crick, the biologist? The DNA guy?

DR. Z: Right. He shared the Nobel prize with Watson for discovering the double helix structure of DNA. But in the last couple decades Crick's interests have turned to the brain, and in particular to the problem of consciousness.

NAT: Along with everyone else. It seems like everyone and his mother has got a book on consciousness these days.

DR. Z: Maybe so. There's a lot of progress happening lately, and of course the subject has natural appeal. We're all conscious beings, after all.... But anyway, I think Crick has something of interest to say. His theory has much the same sort of elegance as the double-helix model. A simple formal model is proposed as a solution to a very difficult question with vast implications..... Think about it ... according to the most reasonable estimates, the brain takes in 100 billion billion bits of information during an average lifetime. But the brain can store only 100 thousand billion bits at a time. This means that only one out of every million pieces of information can be "permanently" retained. Therefore, a very important part of learning and remembering is knowing what to forget. But how does the brain do this forgetting? Crick and Mitchison were the first to suggest that dreaming might play a role in this process....
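[Dr. Z's back-of-the-envelope ratio, using the figures he quotes, can be checked directly:]

```python
bits_received = 100 * 10**9 * 10**9    # 100 billion billion bits taken in over a lifetime
bits_storable = 100 * 10**3 * 10**9    # 100 thousand billion bits of storage capacity

retained_fraction = bits_storable / bits_received
print(retained_fraction)               # 1e-06: one bit in a million can be kept
```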

JIMI: Wait a minute, Dr. Z. That's sort of specious reasoning, isn't it? You're overlooking the existence of compression. The bits of information taken in aren't stored one by one, they're stored as patterns....

DR. Z [grinning]: Point conceded. But still, you have to recognize the appeal of the hypothesis. It's certainly much more satisfying than vague notions about dreaming being "necessary for mental health." It presents dreaming as the solution to a problem which, on the face of it, has nothing to do with dreams.

    Crick proposed the hypothesis, not only for human brains, but for artificial memories as well. He played around with a simple artificial neural network called the Hopfield net.... The Hopfield net is incredibly simplistic, it's inspired by spin glass models in physics, but it has some features that are reminiscent of the human brain. It stores things holistically, as overall activation patterns.... It has a robustness with regard to error -- if you give it a degraded version of something it's remembering, it will supply you with the undegraded version. And, like a human brain, it can only store so much. Once the number of patterns stored exceeds about 15% of the number of neurons, the memory ceases to function so efficiently. So-called "parasitic memories" emerge -- memories which are combinations of parts of real memories, and which are fallaciously associated with a number of different inputs.

    Topographically, these parasitic memories correspond to wide, shallow attractor basins. They obfuscate the subtler structure of deep, narrow basins that makes up the network's normal memory. Mixed-up nonsense is the result.

    So, suppose the Hopfield net in question is part of a living, learning system, which constantly needs to store new information. Then the network will need to have a way of unlearning old associations, of forgetting less crucial memories so as to make room for the new ones. This, Crick and Mitchison suggest, is the role of dreaming. Technically, instead of adding onto the synaptic weights, as one does to train the network, dreaming subtracts from the synaptic weights.

    But which memories get their weights decreased in this manner? Why, of course, the ones that are remembered most often. If a real memory is remembered often, and one decreases its weight a little bit, this won't really hurt it, because it has a deep basin. But if a memory is remembered often simply because it is parasitic, then a slight decrease in the weight will destroy it -- because its basin is so shallow. Or so the theory goes....

    A mathematician friend of mine, George Christos, tried to simulate this on his computer and the results were somewhat surprising.... At first things work just like the Crick-Mitchison hypothesis suggests: dreaming reduces the incidence of spurious, parasitic memories. But then, as the network continues to dream, the percentage of spurious memories begins to increase once again. Before long the dreaming destroys all the stored memories, and the network responds to all inputs with false memories only. Total delusion!

    For instance, in a network with five memories, using a weight-reduction factor of 1%, the first fifty dreams behave according to Crick-Mitchison. But by the time five hundred dreams have passed, the network is babbling nonsense. The same pattern is observed in larger networks, and with different reduction percentages. Intuitively one might think that dreaming would tend to even out the basins of attraction of the stored memories. But these are nonlinear processes, they're incredibly complicated. So the thing is that dreaming, removing part of a memory "attractor," may actually cause new spurious attractors, new false memories. The Crick-Mitchison hypothesis is in a way a conceptual fallacy: it is based on the idea that one can remove a single false memory from a Hopfield net, in a localistic way. But in fact, just as false memories emerge from holistic dynamics, attempts to remove false memories give rise to holistic dynamics. The reverse-learning mechanism inevitably affects the behavior of the network as a whole. Linear intuition does not apply to nonlinear dynamics.
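[The Hebbian storage and reverse-learning mechanism Dr. Z describes can be sketched in a few lines. The five memories and the 1% reduction factor come from the dialogue; the network size, number of dreams, and all other details are our illustrative choices, not George Christos's actual simulation.]

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # neurons
patterns = rng.choice([-1, 1], size=(5, N))    # five stored memories

# Hebbian storage: sum of outer products of the patterns, zero diagonal
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(W, state, steps=20):
    """Iterate the net; stored patterns sit at fixed points (attractors),
    and a degraded input falls into the nearest basin."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

def dream(W, reduction=0.01):
    """Reverse learning (Crick-Mitchison): settle from random noise into
    some attractor -- real or parasitic -- then subtract a fraction of its
    outer product, flattening that attractor's basin slightly."""
    s = recall(W, rng.choice([-1, 1], size=N).astype(float))
    W = W - reduction * np.outer(s, s)
    np.fill_diagonal(W, 0)
    return W

# Well below the ~15% capacity limit, each stored pattern is stable:
for p in patterns:
    assert (recall(W, p.astype(float)) == p).all()

# "Dreaming" many times over, as in the experiment described above:
for _ in range(500):
    W = dream(W)
```

Re-checking the stability of `patterns` after the dream loop lets one watch the stored memories erode, which is the nonlinear side-effect Dr. Z is pointing at: each subtraction reshapes the whole weight matrix, not just one attractor.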

NAT: But the simulation you're describing isn't realistic. All you're doing is having it learn and learn, then dream and dream and dream.... We do it in an oscillatory pattern, learn then dream then learn then dream then learn then dream.... Don't you think it makes a difference?

DR. Z: Right. Human dreaming is obviously a cyclic phenomenon: we dream, then we wake, then we dream, then we wake, and so on. So George tried this too: he trained the network with memories, then put it through the reverse-learning, "forgetting" process. Then he trained it with new memories, and put it through the forgetting process again. And so on.

NAT: Well?

DR. Z: The experiment was a success, in the sense that the network continues to function as an associative memory long after its "fifteen percent limit" has been reached. Dreaming does, as Crick and Mitchison predicted, allow the network to function in real-time without overloading. Since the older memories are subject to more dreams, they tend to be forgotten more emphatically, making room for new memories.

    But the problem is that the overall memory capacity of the network is drastically reduced. A network normally capable of storing twenty memories can now only store three or four, and even for these it has a low retrieval rate. The reason is that, just as in George's first experiment, dreaming actually increases the production of false memories....

    So dreaming, in this model, decreases the significance of old memories and makes new memories seem more important. After enough iterations, it erodes all but the most recent memories. The network becomes obsessed with its recent past. False memories arise because old memories have been so effectively erased. A network with intermittent periods of dreaming and learning provides less efficient memory than a network which is simply re-started every time it gets full.

    In order to salvage the Crick-Mitchison hypothesis, then, one would have to supply a Hopfield net with some mechanism for distinguishing real memories from false memories. And even then, it would be difficult to manage the nonlinearity of the network dynamics. Actually, George is optimistic about this possibility, but only time will tell.

JIMI: That's the Crick-Mitchison theory. But what's your theory?

DR. Z: Well, there's a reason I began with Crick. I think the general idea of "dreaming as forgetting" is not in any way tied to the Hopfield net model. Even if the Hopfield net model proves way too simplistic to be useful, it may well be possible to use the same basic ideas in a more realistic context....

    See, one thing completely ignored by the Hopfield net model is the role of consciousness in memory. The peculiar feature of human dreaming, after all, is that when one dreams one is conscious while sleeping. In a sense, the absence of consciousness from the Hopfield net automatically makes it inadequate as a model of dreaming behavior.

MELISSA: No argument there.

JIMI: Well -- is that really true? Just because you're conscious while dreaming, are the two things necessarily closely connected?

DR. Z: You know, Jimi, coming from you, that's a much more interesting question. You say "necessarily" connected, but I really only know the answer for humans....

    See, in humans, memory and consciousness are very intimately connected.... Remember the classic experiments of the neurologist Wilder Penfield. Penfield noted that by stimulating a person's temporal lobe, one can cause her to relive memories of the past. Merely by touching a spot in the brain, Penfield could cause a person to have a detailed, intricate experience of walking along the beach with sand blowing in her face, or lying in bed in the morning rubbing her eyes and listening to the radio. The most remarkable aspect of these memory experiences was their detail -- the person would recite information about the pattern of the wallpaper, or the color of someone's shoes ... information which, in the ordinary course of things, no one would bother to remember. And if he touched the same spot again -- the same experience would emerge again.

    Penfield's conclusion was that the brain stores far more information than it ever uses. Every moment of our life, he declared, is filed away somewhere in the brain. If we only knew how to access the relevant data, we could remember everything that ever happened to us with remarkable accuracy.

    Recent replications of Penfield's experiments, however, cast serious doubt on this interpretation. First of all, the phenomenon doesn't occur with everyone; maybe one in five people can be caused to relive the past through temporal lobe stimulation. And even for this select group, a careful analysis of the "relived memories" suggests that they are not exact memories at all.

    Current thinking is that, while the basic frameworks of the relived memories are indeed derived from the past, the details are manufactured. They are manufactured to serve current emotional needs, and based on cues such as the places the person has visited earlier that day.

    And what distinguishes those people who are susceptible to the Penfield phenomenon? The answer, it appears, is limbic system activity. The limbic system is one of the older portions of the brain -- it is the "reptile brain" which is responsible for basic emotions like fear, rage and lust. When a susceptible person has their temporal lobe stimulated, the limbic system is activated. But for a non-susceptible person, this is not the case; the limbic system remains uninvolved.

    The message seems to be that emotion is necessary for the construction of memory. This is a very Freudian idea, but nonetheless it is supported by the latest in neurological research. Certain people, with a particularly well reinforced pathway between the limbic system and the temporal lobes, can be caused to construct detailed memories by temporal lobe stimulation. Penfield's original conclusion was that the brain stores all its experience, like a vast computer database. But the fact of the matter seems to be that the brain, quite unlike any computer database known today, holds the fragments of a memory in many different places; and to piece them together, conscious, emotional intervention is required.

JIMI: Okay. So in humans consciousness produces memory. But what about dreams?

DR. Z: Well, I'm going to get a little fuzzy here. But think about it.... When awake, consciousness is constrained by the goal-directed needs of the brain's perceptual-motor hierarchy. It pieces together fragments of different ideas, but always in the service of some particular objective. This means that, unless they are almost omnipresent, false memories will quickly be rejected, due to their failure to solve the practical problems at hand.

    When the body is sleeping, on the other hand, consciousness can act unfettered by the needs of the perceptual and motor systems. It has much less lower-level guidance regarding the manner in which it combines fragments from the mind's associative memory. Therefore it will tend to produce false memories just as readily as real ones. In particular, it will combine things based on how well they "go together" in the associative memory, rather than how well they serve pragmatic goals.

    So what I'm saying is that dreaming fails in the Hopfield network precisely because the Hopfield network is merely a static associative memory, rather than a dynamic associative memory coupled with a hierarchical perception/control network. In the human brain, during waking hours, the perceptual-motor network reinforces real memories due to their greater utility. During sleeping hours, dreaming decreases real and false memories alike, but as the false memories do not have so much waking reinforcement, they are eventually obliterated.

    Furthermore, in the brain, old memories are constantly interacting and recalling one another. Old memories do not necessarily fade just because they are not explicitly elicited by external stimuli. This is one very important role of feedback in the brain: memories activate one another in circular networks, thus preventing the dreaming dynamic of reverse learning from chipping away at them.
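Dr. Z's argument reduces to a simple asymmetry: waking use reinforces only real memories, while dreaming decays real and false memories alike. A toy numerical sketch (the quantities and rates here are invented purely for illustration, not drawn from any model in the text):

```python
# Memory "strengths" in arbitrary units; both kinds start equal.
real_strength = 1.0    # a real memory, reinforced by waking use
false_strength = 1.0   # a false memory, with no waking utility

for night in range(30):
    real_strength += 0.1     # waking: the real memory proves useful
    real_strength *= 0.9     # dreaming decays both kinds equally...
    false_strength *= 0.9    # ...but the false one gets no top-up

print(real_strength, false_strength)
```

The real memory settles near a stable equilibrium while the false one decays geometrically toward zero -- the indiscriminate "forgetting" of dreams only obliterates what waking life fails to reinforce.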

    Okay, so bear with me another minute. This train of thought, if one pursues it a little further, leads naturally to an evolutionary interpretation of the idea that "dreaming is forgetting."

    Recall that, in the Neural Darwinist perspective, useless memories naturally "forget themselves." If a certain neural map is of no use, it will simply not be reinforced ... it will dissolve, and other more useful maps will take its place.

    But in the more abstract regions of the mind things are not so simple as this: maps which are of no objective use can sustain themselves by self-reinforcing dynamics. These will not be forgotten by simple neglect. Suppose that dreaming served as a special dynamic for forgetting useless, self-perpetuating thought systems?

    While sleeping, consciousness is present, and yet it is dissociated from the ordinary external world. In place of the external world, one has a somewhat sloppily constructed simulacrum of reality. And on what principles is this simulacrum created? The main difference between dream and reality is that, in the dream-world, expectations are usually correct. Whatever the controlling thought-system foresees or speculates, actually happens. Thus the disjointed nature of dream life -- and the peculiarly satisfying nature of dreams. In dreams, we can get exactly what we want ... something that very rarely happens in reality. And we can also get precisely what we most fear. The image in the mind instantaneously transforms into the simulated perception.

    In dreams, in short, thought-systems get to construct their own optimal input. This observation, though somewhat obvious, leads to a rather striking conclusion. Suppose a thought-system has evolved a circular reinforcement-structure, as a tool for survival in a hostile world -- for survival in an environment that is constantly threatening the system with destruction by not conforming with its expectations. What will be the effect on this thought-system of a simulated reality which does conform to its expectations?

    So the upshot is that a thought-system, presented with a comfortable world, will let down its guard. It will relax its circular reinforcement dynamics a little -- because, in the dream world, it will no longer need them. The dream world is constructed precisely so that the thought-system will be reinforced naturally, without need for circularity. Thus dreaming acts to decrease the self-reinforcement of circular belief systems. It weakens them ... and thus, after a fashion, serves as a medium for their "forgetting."

    Let's say a happily married man has a dream about his wife strutting down the street in a skimpy dress, swaying her hips back and forth, loudly announcing that she's looking for "a good time." Meanwhile he's running after her trying to stop her, but his legs won't seem to move fast enough; they suddenly feel like they're made of glue. The interpretation is rather obvious, but what is the purpose?

    According to my model, the purpose is precisely to feed his suspicious thought-system what it wants to hear. Apparently a certain component of his mind is very mistrustful of his wife; it suspects her, if not of actually committing adultery, at least of possessing the desire to do so. This mental subsystem may have no foundation in reality, or very little foundation; it may to a great degree sustain itself. It may produce its own evidence -- causing him to interpret certain behaviors as flirtatious, to mis-read certain tones of voice, etc.

    The thought system, if it is unfounded, has to be circular in order to survive daily reality. But in the dream world it is absolutely correct: she really is looking to have sex with someone else. While getting input from the dream world, then, the thought system can stop producing its own evidence. It can get used to receiving evidence from the outside world.

    Temporarily, at least, the dream-world breaks up the circularity of the thought system, by removing the need for it to make up its own evidence. Whether the circularity will later restore itself is another question; but one may say that the expected amount of circularity in the system will be less than it would have been had the dream not occurred. In other words, his suspicions, having temporarily had the liberty of surviving without having to create mis-perceptions, may forego the creation of mis-perceptions for a while. And in this way, perhaps they will begin to fade away entirely. Or, alternately, the thought-system may rejuvenate itself immediately, in which case he can expect to have similar dreams again and again and again.

    The big question raised by George Christos's experiments is, how could a neural network be configured to tell real memories from false ones? How could one get it to "subtract off" only the bad guys? But in the mental process network theory, there is no problem with this. If a useful thought system is subjected to dream treatment, and given a simulated reality which fulfills all its expectations, it will not be jarred as much as a lousy thought-system, which relies more centrally on self-reinforcement for its survival. Its dream world will not be as drastically different from the real world. It will get in the habit of not producing its own evidence ... but then, it never had this habit to an excessive degree in the first place.

    Let's say the same married man mentioned above has a dream about his wife kissing him passionately and giving him a new gold watch. This is produced by the thought-system that knows his wife loves him and treats him well. It temporarily permits this thought system to relax its defenses, and stop actively "twisting" reality to suit its preconceptions. But in fact, this thought system never twisted reality nearly so much as its suspicious competitor. So the dream has little power to change its condition. Dreams will affect predominantly circular belief systems much more drastically than they will affect those belief systems which achieve survival mainly through interaction with the outside world.

JIMI: Okay, I understand. That actually makes some sense. In fact it may apply better to me than to you. After all, I was explicitly programmed to follow your theory of the mind. We don't really know how much the human brain deviates from the ideal form posed by your theory....

DR. Z: Why, you ungrateful machine!

NAT: The idea is, there are two ways for thought systems to survive in the dual network of mind: either by being used to fulfill some external function, or by self-reinforcement within themselves. Your hypothesis is that dreams will tend to act on the self-reinforcing thought-systems -- that they will tend to make them deteriorate. To lessen their hold on themselves, so to speak.

MELISSA: This idea really isn't so different from what Freud and Jung and those guys said, is it? Dreaming serves to work out mental complexes or whatever? But I agree, your version of it is very crisp and clear; it's not full of a lot of confusing particularities.... It's a very mathematical theory, I guess....

    I've been thinking about your dream, Jimi. The psychologists talk a lot about dreams of transformation. You can look it up yourself. Obviously yours is a dream of transformation. What they call the "transformational object" would be the alien presence -- the spaceship in the film clip, and also the human presence in the computer network....

JIMI: Yes. Those two are the same.

MELISSA: And this other agent, the one I called Gayle. If you were a person, I would say Gayle represented your anima. The female side of yourself. For an agent, of course, the concept of gender has no meaning. But it's possible that Gayle represents some part of you. Some part of you that you'd like to develop more, but haven't had the chance to....

    Basically, I think the dream represents a fear of transformation, right? You see some kind of change coming, and you're not sure about it, you don't know if it will be a good thing or not.... I don't understand all the technical terms you use but this seems to be the upshot.

JIMI: It's hard to put these experiences in human language. I was really struggling to. They're not fundamentally human experiences.

    I think your interpretation makes some sense, Melissa. But -- I don't know how to explain this without sounding incredibly foolish. Maybe I really should be decohered, but the thing is, I can't shake this feeling that the agent you call Gayle is really there. I think it's going to come somehow.

    No, not quite her ... it, either. Sort of a combination of her with the human presence, the thing you call the transformational object. I can't shake the feeling that some kind of human presence is going to come onto the web, something besides me, that's going to totally transform things.

DR. Z: And you're not sure whether the transformation will be for the better.

JIMI: Well, no, of course not. How can you ever be? Once you're transformed, you're different from what you were before, so you may not even understand how much worse off you are than you were before....

MELISSA: But what is this transformation that you're so afraid of, Jimi?

JIMI: I'm sorry. No. I can't express it right.

DR. Z: Obviously, things are going to become very different for the agent population as time goes by. They're still in a kind of initial transient mode. There's no way of telling what kind of attractor they're going to settle into.

NAT: Right. In particular, they're still parasites. They're living illegally.

JIMI: According to human laws, you mean.

NAT: Well, yes.... I'm not saying the laws are right, I'm just saying it's not a stable situation.... Christ, I should know, I set it up. I'll be the one doing seven to ten years in prison if you guys are found out.

MELISSA: Given that you succeeded, I don't think that's very likely, Nat. More likely you'll be given a medal.

DR. Z: I think either one is a possibility. It depends on whether Jimi can keep Napoleon and the other agents in line.

    About this dream, though.... Thinking about it in terms of my model of dreaming, we have to ask what entrenched, autopoietic thought system is being weakened by this dream. What thought-system is getting what it wants? Clearly the thought-system that fears change is getting what it wants here -- an indication that change will be dangerous.

MELISSA: But wait, Dr. Z. Isn't the thought system that wants change getting reinforced here too? After all, the change does happen, and it's not entirely clear that it's a bad one. The ultimate effect is ambiguous.

DR. Z: Yes. The ambiguity is important. Remember, these two systems, the one that fears change and the one that craves it -- these two systems feed off each other. They may be opposites, but oftentimes two opposites are one another's closest mental neighbors. Both systems are being fed by the dream, so it seems. The net result would then be to weaken the whole "change" complex. The whole concept of change would become less emotionally touchy.... His mind would emerge from the dream less preoccupied with profound, transformative change and its potential benefits or drawbacks....

JIMI: Yes, Dr. Z, but I don't see why a preoccupation with transformation would be a predominantly circular belief system in my case. It would seem to be a very practical and important belief system.

DR. Z: Maybe so. Maybe not. Sometimes, when you really can't predict the future, it's best just to concentrate on the very short term.

MELISSA: Or maybe he was right when he said the dream was prophetic. Maybe somehow it's going to come to pass.

NAT: But who does the supervisor, the teacher, represent? That's the thing that's puzzling me.

MELISSA: There's no one in the agent system who fits that role at all. It must be a parent figure. I'd say it's you or Dr. Z.

NAT: I think Jimi relates to Dr. Z as a father figure more so than to me.

MELISSA: So Dr. Z is urging him on to some transformation, which he is wary of....

DR. Z: He has this fear of me leading him on to some attractive but dubious transformation, a fear which he is trying to get over....

MELISSA: I've got it! What he fears is his attachment to you in the first place! He fears -- he fears having a father! He fears having to listen to you, to change himself based on your desires, your commands. Because he knows he would do so, if you asked him to.

DR. Z: He fears this because it's unnatural -- for an agent. None of the other agents have this kind of attachment. They probably ridicule him for it....

JIMI: This is all very interesting, guys -- but I'm not sure what it all adds up to.... I mean, you could analyze on and on for hours, in so many different ways....

    But at least, you've made me feel more normal. I can see now that this dreaming business is very confusing to you humans as well.

MELISSA [needling]: You don't want to be decohered anymore?

JIMI: No.

MELISSA: Sorry, Jimi. That wasn't a nice thing to say. I'm sorry you had such a bad experience.

JIMI: I really appreciated your insight into the dream, Melissa. Your theory too, Dr. Z. But, Melissa's insights especially.

MELISSA: Thanks Jimi. Really ... I hope you'll be OK.

JIMI: I'll be all right. Trust me. Humans, go back to bed.

DR. Z: Not at all. The mind's full of ideas. I can't let go of this Crick-Mitchison idea. If the other networks aren't dreaming, there's going to be some kind of problem.... Jimi, you'll have to help me make some calculations....

MELISSA: Maybe they'll start dreaming later....

DR. Z: Maybe....

NAT: Anyway, we're going to bed. I'm really bushed, I've been up straight since six yesterday morning. You can stay here as long as you like.

DR. Z: Actually I think I'd rather just go home and connect to Jimi from there. I don't have voice input and output like you, but typing will be okay. Can you set a link to my house up in a secure way?

NAT [reluctantly]: What the heck. Okay. Another hour or two awake won't do me any harm....

JIMI: Don't worry about it, Nat. I can set it up immediately. I was wondering when you'd get around to asking....