Miraculous Mind Attractor

Copyright Ben Goertzel 1995

4

FRACTALS, FRACTALS EVERYWHERE

Melissa and Nat's living room...

MELISSA: Hey, Dr. Z, this is sort of off the topic, but I keep thinking about fractals, as we're talking about everything else. While we're talking, I keep looking at that giant poster of the Mandelbrot set on the wall behind you. I know you explained before how these pictures fit in with what you're talking about, but I guess I'm just dense, I really didn't get it.

    I bought that poster on Telegraph Road about a year and a half ago, when I went with Nat to this conference in San Francisco.... And we got out this fractal movie from the video store a couple nights ago -- they play this really cheesy electronic music in the background -- it would've been better to just use Pink Floyd or something.... But the pictures are so amazing! It just keeps changing and flowing, one weird-looking shape after the other, so intricate with so many vivid colors.... It looks -- you know, it looks like some colony of intelligent micro-organisms from another solar system or something ... Visual sci-fi!

DR. Z: You know, I was very interested in fractals at one point, but they've sort of gotten tired for me by now. But you know me, I'm always willing to talk.... Although you may find I've developed a rather eccentric way of looking at fractals. I've begun to look at them from more of a psychological point of view....

NAT: Surprise me, Dr. Z. Seems to me you take a pretty eccentric view of everything....

DR. Z: Well, that may be. Better eccentric than crazy, I always say.... The difference, as is well-known, is the amount of money in your bank account.

    But if it's fractals the lovely lady wants, it's fractals she'll get....

    See my schtick about fractals is that they're just a special example of the build-up of hierarchical structure: structures out of structure out of structures....

    Think about it.... Imagine you're a new mind, coming into the universe -- you're not a tabula rasa, you have certain skills and predispositions, but you really have no clue what's going on. You have to build up everything from the beginning. What I'm saying is, when you start out in the world, you're not even in the world.... You don't have any idea of what's there and what's not.... Watching a hand pass behind a cloth, and reappear on the other side, you see two different hands: one going in, and the other coming out.... It's a world without objects, a world without an orderly progression of time, and a world without thought as we understand it ... how can there be thought if there are no persistent entities to reason about?

MELISSA: Okay -- but what does that have to do with fractals?

DR. Z: Just give me a minute. So the infant is caught in a Catch-22: no objects without thought, and no thought without objects. But somehow she learns to perceive order anyway. Aided by her genetic endowment, and by the immense flexibility of her expanding brain, she recognizes patterns in the ongoing flux of sensations and actions. She makes associations: this sound usually accompanies something good. This mixture of sights, sounds, touches, smells and feelings is a something ... a person. These things are above, these things are below. This sensation, this light up above, means a person is coming....

    It's from these simple patterns, these elemental regularities, that our minds are constructed. We learn more and more, and at an exponential rate. Each thing we learn helps us to learn others; the process snowballs, the mind expands. And at each stage, the new things we learn are constructed from the old ones. The mind never stops to begin anew: it builds on what it has learned in the past. The first few words are often extensions of gestures; and the structure of early sentences is based on the structure of gesture communication. Adult relationships are constructed largely from habits learned in early childhood relationships. When learning to divide, one not only uses one's ability to multiply, subtract and add, one also draws on a certain facility with abstract procedures that can only be acquired through experience....

    The patterns that make up our world are constructed of other patterns. And this holds in the physical world as well as the mental world -- bearing in mind, Melissa, that this distinction is a facile one, as the "physical world" that we perceive is constructed by our perceptual and cognitive systems, and the "mental world" that we perceive is inseparable from its experiences in the physical world. The structures that one sees in simple animals often recur in more complex animals -- mutated, and arranged in different ways, but recognizable nonetheless. Why does the "brain coral" look so much like a brain? It's not coincidence, it's because the same self-organizing processes are at work -- only in the brain, they are combined with other processes to form a structure of much greater subtlety. Looking at the pattern of color on the mud flats of Yellowstone Park, one sees branching patterns remarkably similar to those of the blood vessels in the human eye. And this repetitiveness is not, to speak metaphorically, because God has a strange sense of humor, but rather because the Creation Goddess is lazy -- she plagiarizes herself! The same pattern-processes are reused again and again and again. Having found a handful of efficient processes, she rearranges them again and again in hundreds of different ways....

    With all this building-up and reconfiguration of patterns, it is only natural that, in some cases, patterns should build upon themselves. This is the ultimate culmination of "divine laziness" -- or "divine cleverness," to take a more positive point of view. It is a kind of limiting case of natural structure, a case from which most of the messiness of the typical real-world system is eliminated. This special kind of structure is called a fractal. For, all the mathematics aside, a fractal is nothing more or less than a structure which is in some way built up from parts similar to itself.

NAT: Okay, fine, but that's all very general -- what does that have to do with the actual fractal pictures you see?

DR. Z: Well, fractal pictures are a symbolic representation of the deep hidden structures of our world. They're emblematic for the notion of a hierarchy of structures built from structures built from structures..... Look at it this way: their intricate, eternally variable structure mirrors the diverse complexity of our living world.... Fractals are beautiful -- but you should never forget that fractals are more than just pictures. The ideas underlying fractal images are every bit as beautiful as the pictures themselves....

MELISSA: Well, that's a matter of taste, Dr. Z....

DR. Z: The thing you have to remember is that, while to the average eye these are just beautiful, freaky-looking pictures, from the point of view of the mathematician, these are utter monstrosities! They're bizarre, almost paradoxical entities, which classical mathematics is totally unequipped to deal with.

Until the advent of computer graphics in the last few decades, these "pathological sets" -- these monstrosities -- were totally unknown outside the cloistered world of pure mathematicians. They were assumed, by the few hundred or thousand people in the world who knew about them, to have no connection with science or everyday life. But now we know that this assumption was wrong! These mathematical monstrosities are actually powerfully symbolic; they tell us as much about our world as the straight lines, circles and quadratic curves that we all learn about in high school....

NAT: The thing I find most mind-blowing is the concept of fractal dimension. The idea that something can have a dimension, like, between two and three -- this is totally amazing to me!

DR. Z: Well, the fractal dimension has probably been oversold. It's an interesting number, but it's really not the crucial thing.

NAT: No, I don't agree. Dimension is something everyone can understand. A point has zero dimensions, a line has one dimension, a sheet of paper has two dimensions (almost!), a cube, or a chair or a tree, has three.... All very orderly, very professional -- a tidy way of structuring the world. We propel ourselves in a one-dimensional path every time we walk around; it is very easy to understand more dimensions as more "directions" to move in.

    Remember, the notion of dimension isn't innate or anything. It was originated by Descartes. He saw a fly zipping around in his room and it occurred to him that, at any point in its path, the position of the fly could be characterized by three numbers, what we'd now call the x, y and z coordinates.... Dimension is part of the modern scientific ordering of the world....

    But fractals, see, represent a level of structure deeper than the facile division of objects into one, two and three dimensions. At least that's how I think about it. So I don't think it's trivial at all... The idea of dimensions like .5454 ... 2.34345 ... 3430.2124 ... 1.58496250 ... it's totally mind-blowing. Can you imagine moving around in a space of 2.6 dimensions? Feeling your body in that kind of space? It'd be totally amazing!

DR. Z: Well, Benoit Mandelbrot, Mr. Fractal himself -- the man most responsible for bringing fractals into popular culture -- seems to agree with me on this one. He's said that he regrets having overemphasized dimensionality in his early treatments of fractals. I had breakfast with him a few years ago and we discussed this at length.

    But I guess I see your point. What's interesting about fractals is not the fact that they have noninteger dimension, but the fact that they are complex structures composed of parts that are equivalent to the whole. But nevertheless, the presence of noninteger dimensions is powerfully symbolic.... It symbolizes the many deeper ways in which fractals -- and more generally, the recursive nesting structure of complex patterns -- disrupt the rigid, oversimplistic concepts into which we've tried to force our world.

MELISSA: I don't understand these dimensions at all. What does it really mean to have a dimension of 1.5 or something?

DR. Z: I couldn't give you a full explanation without going into a lot of advanced math -- but the basic idea is very simple. The key is in the hierarchical structure. Look at this picture, which is called the Sierpinski triangle. To figure out the fractal dimension of this you just have to observe that this shape is made up of three half-size copies of itself....

    This may not seem like anything to get hot and bothered about, but in fact it's a rather unusual property. Think about what happens if we take shapes like squares and triangles and try to construct them out of copies of themselves. A square is made of four half-size copies of itself. An equilateral triangle -- the same thing. A parallelogram -- the same thing.

    On the other hand, what about a simple line segment? It consists of two half-size copies of itself, right? Cut it in half, put the two pieces together, and one gets the original segment back.

MELISSA: It doesn't work with a worm, but with a line segment -- no problem!

NAT: Right, and what about a cube? Eight half-size copies of itself.

DR. Z: Okay. So if you ignore the Sierpinski triangle, and just look at nice shapes like line segments, triangles, squares and cubes, there is a pleasant formula here: To find out the dimension of an object, find out how many half-size copies of itself it consists of. The dimension is then the number of times you have to multiply two by itself to get that many copies.

NAT: Cool, I get it. A line segment is one-dimensional, and consists of two half-size copies of itself -- 2 to the power of 1. Squares and ordinary triangles are two-dimensional, and consist of four half-size copies of themselves -- 2 to the power of 2. A cube is three-dimensional, and consists of eight half-size copies of itself -- 2 to the power of 3.

    But what about the Sierpinski triangle? Three half-size copies of itself -- we need to have two to the power of the dimension equals three. But two to what power equals three? The answer is approximately 1.58496. This is the Sierpinski triangle's fractal dimension.... By similar reasoning this picture -- called the von Koch snowflake -- has dimension 1.26185.... This picture consists of four third-size copies of itself; its fractal dimension is the solution to the puzzle "three to what power yields four?"
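The arithmetic Nat works through -- solve "two to what power equals three?" -- is just a logarithm, and it's easy to check by machine. A minimal Python sketch (the function name is mine, not standard terminology):

```python
import math

def similarity_dimension(copies, scale):
    """Dimension d solving scale**d == copies, i.e. d = log(copies) / log(scale)."""
    return math.log(copies) / math.log(scale)

print(similarity_dimension(2, 2))  # line segment: two half-size copies -> 1.0
print(similarity_dimension(4, 2))  # square: four half-size copies -> 2.0
print(similarity_dimension(8, 2))  # cube: eight half-size copies -> 3.0
print(similarity_dimension(3, 2))  # Sierpinski triangle -> 1.58496...
print(similarity_dimension(4, 3))  # von Koch curve: four third-size copies -> 1.26185...
```

The same one-liner recovers the familiar integer dimensions and the fractal ones alike.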

MELISSA: So what the fractal dimension really measures isn't quite dimension at all -- it's a kind of "thin-ness".... The Sierpinski triangle is in a sense denser, more full of points, than the von Koch snowflake, thus explaining its greater fractal dimension. Is that all there is to it? It's not nearly so cosmic as it seems....

DR. Z: Sure, thin-ness.... That would be a fair way to put it. Now, most shapes aren't so simple as squares, cubes and Sierpinski triangles; they can't be exactly formed out of miniature copies of themselves. But they can be approximated by miniature copies of themselves, and when one works out the mathematics for this, one arrives at the same conclusion. Most shapes have funny dimension properties! They are too thick to be two dimensional, or too thin to be three dimensional. Or, they are too thin to be two dimensional, but too thick to be one dimensional....

MELISSA: The thing is, "dimension" is a word with both mathematical and literary meanings. Away from technical people like you guys, a "dimension" is just an aspect of something, a way of viewing something.... So what we have here are, you might say, new dimensions for new dimensions!

NAT: Oh Liss, I love it when you're pithy....

DR. Z: So that's the deal with fractal dimensions. The real question is, what the heck do these numbers mean! Why should anybody care about them? Very different systems can have the same fractal dimension -- the question to be asked in such cases is whether this identity of fractal dimension hints at some deep similarity of process, or whether it's just coincidence.... There are cases of both. You know, if you plot someone's brain waves, their EEG, then you can tell whether they're sleeping or dreaming or awake by computing the fractal dimension -- it comes out different for each state!

MELISSA: What do you mean by the fractal dimension of a brain wave? A brain wave isn't a shape like these triangles and squares we're talking about....

DR. Z: See, here we get into the interesting part. The really fascinating thing to me isn't using fractal dimensions to quantify shapes, but using fractal dimensions to quantify dynamics....

MELISSA: You're obsessed with dynamics.

DR. Z: It's Heraclitus. The only constant is change. You can never step into the same river twice....

MELISSA: That's definitely true if the river is the Amazon. The piranhas will get you! But if you accept that a construct like a "river" is an abstract thing -- an emergent pattern rather than a particular configuration of molecules -- then the idea doesn't hold up. Patterns across space are just as valid as patterns over time. Why not?

DR. Z: Okay, you've got me....

    I guess I tend to overemphasize process because it's been underemphasized in Western science for so long. Psychology has almost entirely ignored dynamics, and even a science like physics, which is full of differential equations, has never gone beyond fixed points and limit cycles. Until very recently, that is. The subtlety of spatial structure has been known for a long time; the subtlety of dynamics was well known to the ancient Chinese, but only just now is our science finally catching on to the idea....

    So, yeah, I'm obsessed with dynamics. But I have my reasons!

    Anyway, the point to be made regarding fractals and dynamics is that the connection runs two ways. First of all, hierarchical structures are usually associated with hierarchical processes of branching, development, growth.... And the dynamics associated with fractal structures are chaotic dynamics.... An example is the brain, which I believe has a loosely "fractal" structure of neural networks within neural networks within neural networks.... The dynamics of each of these subnetworks, on all the different levels, can switch back and forth between stable, periodic and chaotic modes.

    Or, the other way around, you can take something that changes over time, and you can make it into a shape, and look and see if this shape is a fractal. You can even take someone's EEG recording, and make it into a shape. The geometry of the shape tells you something about the state of the brain. You can make a two-dimensional shape by taking the EEG at one time as the x coordinate, and the EEG a millisecond later as the y coordinate. Then you can find the fractal dimension of this shape. And it comes out to be different for different states of awareness. That's interesting! By looking at fractal dimensions of shapes gotten from EEG, you can tell waking from dreaming; you can tell a cocaine high from normal awareness....
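The construction Dr. Z describes -- plot the signal at one moment against the signal a moment later -- is what dynamicists call a delay embedding. A toy Python sketch, with a sine wave standing in for a real EEG:

```python
import math

def delay_embed(signal, lag=1):
    """Turn a time series into a 2-D point cloud:
    each point is (signal[t], signal[t + lag])."""
    return [(signal[t], signal[t + lag]) for t in range(len(signal) - lag)]

# A toy "EEG": a pure sine wave. Its delay plot traces a closed loop --
# an essentially one-dimensional shape. A chaotic signal would instead
# smear into a cloud whose fractal dimension is noninteger.
toy = [math.sin(0.3 * t) for t in range(200)]
cloud = delay_embed(toy)
print(len(cloud))   # 199 points
print(cloud[0])
```

Estimating the fractal dimension of the resulting cloud (by box-counting, say) is a separate step, omitted here.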

MELISSA: Wild!

DR. Z: Yeah. But still, it's a pretty crude way of measuring the structure of a changing system. It just gives you a single number -- it's not nearly as revealing as, say, inferring a language that represents the way the system changes....

    And the puzzling thing is, there are two possible sources for the fractal dimensionality of EEG. One is, it could come from the chaotic dynamics of the neural networks on some level in the brain. The other is, it could come from the overall fractal, recursively modular structure of the brain. In all probability it's some kind of combination of the two. So like always with EEG, you can't be sure what it is you're really measuring. You're just reading this electric wave passing through the brain at this particular point -- but the wave is generated by the combination of a number of different areas of the brain.... It's so damn complicated!

NAT: Okay. But still, you're sounding a little more positive about the fractal dimension here.

DR. Z: It's not that there's anything wrong with the fractal dimension of a picture. It can be a useful number. The thing is that there are lots of other numbers to characterize attractor pictures, besides the fractal dimension. There's something called the Liapunov exponent, which tries to gauge the amount of chaos in a system. This is a very important concept in chaos theory, because, although everyone agrees that lots of systems are chaotic, it's very hard to get a consensus on exactly what chaos is....

MELISSA: A chaos is a chaos is a chaos?

DR. Z: Ah yes -- all complex systems are chaotic. But some are more chaotic than others!

NAT: You guys are ridiculous.... Stop mangling quotations and tell me how you do measure chaos in a system. That sounds like a useful thing to know. I think my own mind is chaotic, I'd be interested to find out a way of testing the hypothesis....

DR. Z: The Liapunov exponent is an easy idea.... Think about it this way -- under ordinary circumstances, when you throw a ball up in the air, the precise point at which you're standing doesn't make a heck of a lot of difference. If you move a little, the ball will land at a slightly different place. But with a chaotic system, a slight change in initial conditions leads to a dramatic change in future conditions. Imagine you're standing outside during a terrible storm, with unpredictable wind conditions. In this case, moving aside by a quarter inch might well cause the ball to travel in a completely different path through the air. If you tracked the location of the ball one minute after throwing, two minutes after throwing, three minutes after throwing, etc., the results would be totally different, based on an incredibly minute change in the position of your feet. In technical lingo, this is called "sensitive dependence on initial conditions." One way to measure the severity of this sensitive dependence is a number called the "Liapunov exponent." It tells you how fast nearby trajectories move apart.

    Suppose you have two different particles at two different starting points -- Starting-Point 1 and Starting-Point 2. And suppose that, after a certain amount of time has elapsed, these two particles have moved to two different points, Destination 1 and Destination 2. Then the quantity of interest is the ratio between final and initial distances: the distance between Destination 2 and Destination 1, divided by the distance between Starting-Point 2 and Starting-Point 1. The Liapunov exponent measures how fast this ratio increases as time elapses -- in the case where the two starting points are very, very close together.

    If this ratio shrinks as time passes, then the trajectories are all zooming together, and so the Liapunov exponent is negative. If the trajectories neither diverge nor contract, then the Liapunov exponent is zero. And, finally -- this is the interesting case -- if the ratio keeps growing as time passes, then something funny is going on. Close-by starting points are giving rise to wildly different trajectories, and the Liapunov exponent is positive. The bigger the exponent, the faster the trajectories fly apart. A system in one dimension has one Liapunov exponent. A system in two dimensions, on the other hand, has two exponents, one for motion in each direction. A system in three dimensions has three Liapunov exponents, and so forth.... Things become messy. But the point is that one has a way to measure chaos -- one has a "chaometer," a way of quantitatively gauging the degree to which a system is chaotic.
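For a one-dimensional map the "chaometer" is easy to build: average the log of the map's derivative along an orbit. A sketch for the logistic map x -> r·x·(1-x) (the starting point and iteration counts are my choices, not anything from the discussion):

```python
import math

def lyapunov_logistic(r, x0=0.4, n=10000, burn=1000):
    """Estimate the Liapunov exponent of x -> r*x*(1-x) by averaging
    log|f'(x)| = log|r*(1-2x)| along an orbit. Positive = chaos."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

print(lyapunov_logistic(3.2))   # negative: orbits converge to a stable 2-cycle
print(lyapunov_logistic(4.0))   # positive, near log 2: fully developed chaos
```

The sign alone separates orderly from chaotic behavior; the size says how severe the chaos is.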

NAT: A chaometer, eh? Put that way it sounds pretty nifty. Hey, Liss, what's your Liapunov exponent?

MELISSA: My Liapunov exponent is three times bigger than yours!

NAT: Yes, but it's not the size that's important....

DR. Z: On the contrary, in this case it is the size that's important.... Though the sign is even more important.

    Seriously, guys, this reminds me of something interesting. You know, these numbers, fractal dimensions and Liapunov exponents, can be used to gauge how pretty the strange attractor of a system is....

    As a mathematician, I've spent a lot of time looking at fractals generated by simple equations that change over time.... These aren't as complex as most real-world fractals, but they do possess a remarkable variety of structure, far beyond the regular self-similarity of the pre-computer-age "mathematical monstrosities".... Look! [holds up some pictures he has generated on his computer] These pictures I generated from two-dimensional quadratic equations. They're not obviously self-similar like the Sierpinski triangle. But they are very often fractal nonetheless. They live in the same never-never-land between one and two dimensions.
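One classic two-dimensional quadratic iteration -- not necessarily one of the maps Dr. Z has in hand, but the same species -- is the Henon map. A few lines of Python collect the points of its strange attractor:

```python
def henon(n=5000, a=1.4, b=0.3):
    """Iterate the Henon map, a two-dimensional quadratic iteration,
    and collect the points of its strange attractor."""
    x, y = 0.0, 0.0
    pts = []
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        if i > 100:              # skip the transient before the orbit settles
            pts.append((x, y))
    return pts

pts = henon()
xs = [p[0] for p in pts]
print(min(xs), max(xs))   # the orbit stays bounded, roughly within [-1.3, 1.3]
```

Scatter-plotting `pts` reveals the folded, filamentary structure -- a curve-like object thickened into something more than one-dimensional.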

    In the case of pictures like these, a physicist named J.C. Sprott has shown that the fractal dimension and the Liapunov exponent allow one to approximately predict the artistic value of an attractor....

MELISSA: What do you mean?

DR. Z: Well, Sprott had eight people, including several artists, rate randomly selected attractors produced by the quadratic iteration. The result was surprisingly simple: all evaluators preferred pictures with a fractal dimension between 1.1 and 1.5, and a Liapunov exponent between 0 and .3 (often even below .1). Many natural objects have a fractal dimension between 1.1 and 1.5, so this part of the finding is not too surprising. But the aesthetic superiority of a low Liapunov exponent is somewhat puzzling. Nonchaotic iterations, which don't produce strange attractors at all, lead to completely uninteresting images. Yet it seems that intense chaos is not pleasing to the eye either ... we demand a certain amount of order from our artwork.

NAT: See, Liss, size does matter, but if it's too big that's no good either. There's an optimum...!

MELISSA: Please, stop....

DR. Z: While we're on the topic of sexual reproduction, I should mention a little experiment I've conducted myself....

MELISSA: Oh boy, I can't wait! Something kinky I hope....

DR. Z: Not at all, I'm afraid. Maybe we should adjourn so you two can go home and amuse yourselves....

    What I was going to say was that Sprott's computer program uses a random search method to find "attractive" attractors -- it tries out coefficients at random, and if the Liapunov exponent is inappropriate, it doesn't bother to compute the whole attractor, but just moves on to the next one. But you can actually evolve attractive attractors, using a mathematical technique for simulating evolution called the genetic algorithm. This is a very interesting experiment in computational evolution. Simulated evolution does, as it turns out, produce more "hits" than random search: given two "parent" equations which give you strange attractors, crossing them over produces a "child" that is also fairly likely to give you a strange attractor. There are problems with visual redundancy, in that the child is often similar to the parents. But there are also many cases where this redundancy is not present. Look at this [he takes out some more papers] -- two parents and their child -- all strange attractors of quadratic iterations, but not at all similar to one another. Just as with human offspring, the offspring of two chaotic iterations need not resemble its parents! Pretty pictures spawn pretty pictures, complex structures spawn complex structures, and along the way diversity is generated. Amazing!
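The genetic-algorithm step Dr. Z describes can be sketched in a few lines: treat each equation as its list of coefficients, and build a child by mixing the parents' genes. This is a toy illustration -- the coefficient values below are made up, and the fitness test (checking that the child's iteration actually yields a strange attractor) is omitted:

```python
import random

def crossover(mom, dad, mutation=0.05):
    """One 'child' coefficient vector: each gene is taken from a
    randomly chosen parent, then given a small Gaussian nudge."""
    return [random.choice(pair) + random.gauss(0, mutation)
            for pair in zip(mom, dad)]

random.seed(1)
mom = [0.9, -0.6, 1.6, 0.0, 0.5, 0.0]    # hypothetical quadratic-map coefficients
dad = [-0.7, 1.2, 0.0, 0.9, -0.3, 0.4]
child = crossover(mom, dad)
print(child)    # each entry sits near one parent's gene or the other's
```

In the full experiment one would iterate the child's equations, compute the Liapunov exponent, and keep only children that produce bounded chaotic orbits.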

NAT: Amazing....

MELISSA: Not very kinky, though....

DR. Z: Right. Now, if you two can restrain yourselves a little more, we're almost to the point of the really pretty pictures....

    The picture up on your wall is something called a Julia set, discovered by a French mathematician named Julia around the beginning of the 20th century.

    In going from the Sierpinski triangle to the attractors of the plane quadratic map, we moved up a level in complexity. But even these quadratic attractors pale in comparison to Julia sets.... Deeper and deeper, smaller and smaller, no matter how finely one looks at them, one keeps uncovering more and more different kinds of structure. Looking at these sets is just like exploring the world: each thing you uncover, on closer examination, leads to something new ... and the process never ends! Colors, shapes, relationships, endlessly repeating yet endlessly varying, neither repetitive enough to be boring nor disparate enough to be confusing. Truly fantastic!

    And the really amazing thing is that these pictures are not in the least bit esoteric.... They derive immediately from very simple dynamical systems, from high school algebra. They were there in quadratic equations all along -- the Babylonians, who knew quadratics, could have discovered them. It's just a matter of looking at these familiar equations in the right way....

NAT: So are you going to tell us the right way?

DR. Z: I'll give you the short version. To get the Julia set for a given mathematical system -- a quadratic equation or anything else -- one simply tries every possible initial state for the system and notes, for each one, whether the sequence of states for the system tends toward infinity or not. The Julia set is the set of all points for which the iterates don't zoom out to infinity -- for which the system states remain finite.
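For the quadratic map z -> z² + c this membership test is a few lines of Python. The particular constant c is my choice (every c gives a different Julia set), and for this map any orbit that leaves the disk of radius 2 is guaranteed to escape when |c| <= 2:

```python
def in_julia(z, c, max_iter=200):
    """Escape-time test for z -> z*z + c: does the orbit from z stay bounded?"""
    for _ in range(max_iter):
        if abs(z) > 2.0:    # outside radius 2 (for |c| <= 2), escape is certain
            return False
        z = z * z + c
    return True             # still finite after many steps: count it as in the set

c = -1 + 0j                 # my choice of constant; each c gives a different picture
print(in_julia(0j, c))      # True: the orbit 0, -1, 0, -1, ... stays bounded
print(in_julia(2 + 2j, c))  # False: a far-out start zooms off to infinity
```

Coloring each pixel of the plane by this test (or by how quickly the orbit escapes) is exactly how the poster-style images are made.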

    And the infamous Mandelbrot set is nothing more than a catalogue of Julia sets: at each point in the plane it looks like the Julia set you get from taking that point as the parameter -- the constant term -- of the system....
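A Mandelbrot membership test iterates z -> z² + c starting from the one point z = 0, once for each value of the constant c -- a minimal sketch:

```python
def in_mandelbrot(c, max_iter=200):
    """Does the orbit of z = 0 under z -> z*z + c stay bounded?"""
    z = 0j
    for _ in range(max_iter):
        if abs(z) > 2.0:    # past radius 2, the orbit is certain to escape
            return False
        z = z * z + c
    return True

print(in_mandelbrot(0j))        # True: z stays at 0 forever
print(in_mandelbrot(-1 + 0j))   # True: the period-2 cycle 0, -1, 0, -1, ...
print(in_mandelbrot(1 + 0j))    # False: 0, 1, 2, 5, 26, ... blows up
```

So the "catalogue" structure is literal: the Julia test varies the starting point for a fixed constant, while the Mandelbrot test varies the constant for a fixed starting point.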

    So the thing is, these fancy-looking sets are really just catalogues of complex system behavior.... As intricate as they are, they're far simpler than the sets you'd get from studying real-world systems, instead of mathematical dynamical systems like quadratic equations....

    There's a guy I know named Robert Gregson -- a distinguished psychologist whom I visited a couple years ago in Canberra, at the Australian National University. He's looked into this in a very concrete way. He's an interesting guy -- was once a train engineer in England, as it turns out.... Anyway, Robert has shown that two-dimensional cubic iterations -- just a small step beyond our quadratics -- can be used to develop a comprehensive theory of the brain's perception of physical stimuli.....

NAT: That's interesting. What do you mean?

DR. Z: Well, Gregson does psychophysics, which is an obscure branch of psychology concerned with the quantitative study of sensory processing. Psychophysics studies the numerical mappings between the sensations received by an organism, and the responses which these sensations elicit. It asks questions like, which properties of the sensations are carried over to the responses, and which ones are changed?

    Classical, old-style psychophysics consisted of simple numerical relationships such as Fechner's Law, which states that the internal effect of a stimulus is proportional to the logarithm of the stimulus amount. Or Stevens' Law, which states that this isn't a logarithm but actually a power: the effect is the stimulus amount taken to some power. But modern neuroscience rejects these simple all-purpose formulas. The brain is much more complex than that....

    What Robert says is that to deal with this complexity, we have to build new, massively nonlinear psychophysical laws. He proposes a scheme in which each mental process is represented by a "gamma recursion," a special cubic equation he's come up with.... He's shown that this approach may be used to model a variety of real psychophysical situations -- from the way hearing and sight affect each other, to the way the visual brain fills in broken lines, to the way a number of smells that aren't sweet can give us the illusion of tasting sugar....

    So Julia sets, in this view, are re-envisioned as a way of thinking about the global behavior of mental processes. The Julia set of a gamma recursion is the boundary between those coefficient values which are potentially useful, in that they lead to bounded trajectories, and those which are pragmatically irrelevant, in that they lead to divergent trajectories. As the mind adjusts its component processes, it must keep their coefficients within the appropriate Julia sets ... otherwise it is just generating divergent processes, and wasting its time. The intricate fractal contours of Julia sets are the boundaries of mental change.

    Anyway, Robert's work is exciting -- but it's only the beginning. I've shown you Julia sets in two dimensions, but the same structures also exist in higher dimensions -- four, five, eight dimensions, and so forth. Another guy I know, Stuart Ramsden, has produced beautiful color movies from cross-sections of four-dimensional Julia sets. I think I have a copy of the video somewhere.... In fact, if you set up the right methods for "multiplying" vectors in higher-dimensional space, you can make Julia sets that correspond to any system whatsoever. Any complex system can be modeled in terms of some higher-dimensional iteration, and the Julia set for that system is then the boundary of the collection of system states that do not cause the system to "blow up" into a condition where some system parameter becomes infinite.

    See, the two-dimensional Julia sets can then be understood as a lower-dimensional metaphor for complex system structure. They are an easily visualizable counterpart of those higher-dimensional fractals that invisibly regulate the complex world around us....

    So what I think is that fractals may really be everywhere -- not every structure is a fractal, but every complex system is governed by a dynamic whose state space has fractal boundaries. Fractal pictures reveal a kind of structure that is normally hidden from view -- hidden in the transitions from one system state to another, in the subtle, barely perceptible patterns of motion and change in our complex world.

NAT: Getting back to the brain building stuff -- you said before that the hierarchical network of mental processes, by which the mind accomplishes perception and action, leads to a fractal memory network. We have clusters of ideas, within clusters of ideas, within clusters of ideas... The whole mind is a fractal.

DR. Z: Right -- the mind is fractally structured. But it's also fractally dynamic. The way it is built is fractal, and the way it changes is governed by fractal sets like Julia sets. The set of states which keep it running acceptably, instead of running amok, is a fractal set.

    And the thing is, this is a system that is constantly recognizing dynamical patterns in itself.... The mind, to a certain extent, understands its own Julia sets! It understands the fractal boundaries that it has to stay within in order to remain effective, healthy, viable. And it understands the fractal boundaries of other minds also.... The mind's fractal structure gives it the intelligence to understand its own fractal dynamics! And, of course, the fractal structure also affects the fractal dynamics in all sorts of ways.... The Julia set of a fractally structured system has got to be very different from the Julia set of a non-fractally-structured one. This is something we don't really understand....

MELISSA: Well, it's something I don't really understand, that's for certain....

NAT: Dr. Z, you've been telling us about the scientific and philosophical meaning of these fractals, which is really pretty neat -- I didn't know most of that stuff. But the other thing I'm wondering about is the artistic meaning. Liss bought this poster as a decoration for the wall, and it serves its purpose very well. But when I took a class in art history, it seems like "decorative" art was really looked down on. As being not really serious, or whatever.... If they wanted to insult a painter, they called his work "decorative"....

MELISSA: Well, there are some famous artists like Matisse who were really into decorative art. But I guess his purely decorative stuff, like his Jazz -- these brightly colored paper collages -- isn't really looked up to as much as his other stuff. In his paintings the decorative patterns were always in the background. But his whole composition method was modeled on decorative art -- totally unlike most of the other artists you see.

DR. Z: I guess my background in aesthetics is pretty weak. The idea would be that decorative art is nice to look at for very short periods but doesn't reveal deeper underlying structures with greater scrutiny? In that case I would say fractals aren't merely decorative art. They reveal greater and greater subtlety the more you look at them.

MELISSA: Yes, but that's not exactly the criterion. The patterns on Greek pottery or Oriental tapestries are really intricate too -- in fact I think I read somewhere that some of them are fractals -- but they're not considered "serious" art, they're still thought of as decorative. Because, in fact, they are decorations; they're just there to make household items look pretty.

DR. Z: My honest opinion would be that this distinction is a bunch of B.S. Just something that art critics made up to talk about. Probably these other cultures -- Greek, Indian, whatever -- didn't have any such distinction. I know that in primitive cultures all art was not only decorative but also of functional and religious significance. This idea of detaching art from life and culture is a relatively recent thing, and it's not necessarily a positive development.

MELISSA: I think the connection between fractals and serious art can be found by looking at modern artists like Mondrian. You know all the paintings Mondrian did with these rectangles of different colors. What he was trying to do, along with the other de Stijl artists, was to somehow express the simplest regularities of the mind and world in pictures. The simple rules that underlie the universe.

DR. Z: Yes, that's very interesting. If you think about it, it's really a very modern idea. Now we have cellular automata, which are computer models that try to explain physical and biological laws in terms of changing patterns of different-colored squares. Mondrian was ahead of his time!

MELISSA: In a way, yes. See, fractals follow the same idea he had. You can take de Stijl paintings and determine simple rules underlying them -- grammatical rules, just like you find in dynamical systems with your pattern recognition algorithm. But the thing is, they don't apply these rules in a sufficiently interesting way -- which is why they look so damn monotonous. Fractals take the same idea he had and do it right. They take a few simple rules and instead of making monotonous patterns with them, they make intricate, complex, beautiful patterns with them. Which is exactly what the universe does with its simple rules!

DR. Z: So if Mondrian is serious art, so are fractals? Is that your argument?

MELISSA: Right!

DR. Z: Really, you're endlessly clever! But it all still seems like a pile of words to me, somehow....

    You mentioned using fractals to generate the patterns on carpets and vases -- I do know something about that, though. In fact, you can use formal languages to do that; this is how I happened across it. It's actually quite interesting.

MELISSA: You mean, it's another piece of evidence that everything is language. Fractal structure is linguistic structure.

DR. Z: Right. Except here, instead of using languages to analyze fractal attractors, one uses languages to generate fractal pictures.

    See, English sentences are not fractals, but they are formed by interacting grammatical rules that have the potential for fractality within them. This point has been made quite nicely in a recent article by the French computer scientist Jean Pierre Paillet. He gives the example of the sentence

    the man who wanted it won the prize he deserved

This sentence is not exactly an English teacher's dream, but it's well within the bounds of normal spoken English. The thing is that, while "it" stands for "the prize he deserved," also "he" stands for "the man who wanted it." In other words, we have the two "substitution rules"

    it -> the prize he deserved

    he -> the man who wanted it

Using these rules, the sentence can be expanded as follows:

    the man who wanted {the prize he deserved} won the prize {the man who wanted it} deserved

But, think about it, there's no reason to stop the process at this level -- why not

    the man who wanted {the prize {the man who wanted it} deserved} won the prize {the man who wanted {the prize he deserved}} deserved

Expanding this infinitely many times, one obtains an infinite ramification that is accurately described as fractal in its structure. Paillet concludes that "from the point of view of syntactic form, some pronouns have infinitely deep structure."
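Paillet's expansion is just a rewriting system, and it can be sketched directly in Python. The two substitution rules below come straight from the dialogue; the braces are omitted so that the expanded text remains an ordinary sentence.

```python
# The two "substitution rules" for the pronouns in
# "the man who wanted it won the prize he deserved".
RULES = {
    "it": "the prize he deserved",
    "he": "the man who wanted it",
}

def expand(sentence, depth):
    """Apply every substitution rule in parallel, once per level, depth times."""
    for _ in range(depth):
        words = sentence.split()
        sentence = " ".join(RULES.get(w, w) for w in words)
    return sentence

s = "the man who wanted it won the prize he deserved"
print(expand(s, 1))
# prints: the man who wanted the prize he deserved won
#         the prize the man who wanted it deserved
```

Each level of expansion reintroduces the pronouns it just eliminated, so the sentence grows without bound -- the "infinitely deep structure" Paillet describes.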

NAT: But you can't claim that we construct an infinitely deep fractal structure in our heads when we perceive that sentence!

DR. Z: No -- but we may construct something similar to an algorithm for constructing a fractal set. So that we can construct the fractal, on demand, to any specified depth....

    Anyway, since sentences have potentially fractal structure, it's only natural that someone has figured out how to use sentences of a sort to generate fractal pictures. The surprise is that this little piece of cross-disciplinary sleight-of-hand is also relevant to the problem of understanding biological form! There seems no end to the number of disciplines that complexity science can draw together!

MELISSA: Biological form, eh? You mean, like, the human body is a fractal?

DR. Z: Well, in fact it is. The branching of blood vessels into capillaries and smaller and smaller capillaries is fractal. The branching of dendrites off a neuron is fractal; some physicists have measured its fractal dimension and modeled its development using a fractal growth process called "diffusion-limited aggregation." The body is chock full of fractal structures -- as well as fractal dynamics, such as heartbeats and EEG.

    But anyway, most of the work using grammars to generate biological forms has been with plants -- herbaceous plants, in particular, rather than woody ones. The idea originated with the biologist Aristid Lindenmayer back in the 60's -- way before the era of flashy computer graphics! Lindenmayer's goal was not pretty pictures but rather useful mathematics. He wanted to make a mathematics that would do for biology what calculus has done for physics. So, toward this end, he borrowed from Noam Chomsky the idea of an abstract grammar, and added on the idea -- anticipated in a vague way by artists like Mondrian, as you have said -- of using grammar-type rules to generate spatial forms instead of sentences. L-systems, as we call them now -- named after Lindenmayer -- are formal languages which simulate growth and branching processes. They simulate the way biological organisms grow out in all directions at once. Applied to computer image generation, they produce fractal structures that branch out in all directions, each branch containing sub-branches that are similar to the whole.

    The basic idea is to take an arbitrary system of grammatical rules -- not necessarily connected to any human language -- and use it to expand a simple "sentence" in the same way as we have expanded "the man who wanted it won the prize he deserved," using the two rules for "it" and "he." The difference is that the sentences involved need not be made out of words -- they may be made, instead, out of abstract symbols, or out of symbols that stand for drawing commands, commands that tell a simulated pencil on the computer screen how to draw.
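The mechanics of such a system are easy to show concretely. The sketch below uses a standard textbook bracketed L-system, not one of Lindenmayer's original biological models: F means "draw forward," + and - mean "turn," and the brackets mean "remember/restore your position" -- which is how a single string encodes a branching plant.

```python
# Minimal L-system expander: every symbol is rewritten in parallel,
# once per generation (symbols with no rule are copied unchanged).

def lsystem(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Each segment sprouts two side branches per generation:
rules = {"F": "F[+F]F[-F]F"}
print(lsystem("F", rules, 1))       # prints: F[+F]F[-F]F
print(len(lsystem("F", rules, 3)))  # prints: 311 -- explosive, fractal growth
```

Feeding the resulting string, symbol by symbol, to a "turtle" drawing routine produces the familiar branching fractal images.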

    A guy named Przemyslaw Prusinkiewicz -- great name, huh? -- has used this method to generate all manner of fancy-looking computer graphics of plants. The graphics use linguistic rules based on sophisticated models of the biological processes involved in plant growth. The implications for things like movie-making are really tremendous. Think about it -- if you want to make a cartoon which takes place in the forest, you need hundreds of different plants, of dozens of different species; and to be realistic, each one must be slightly different from the others. Instead of having an artist painstakingly draw each one, now all you need to do is to construct an L-system for each species, and vary the system slightly to get different plants of the same species.

    And the thing is -- to get back to where this started, finally -- the same method you can use for plants, you can also use for generating all kinds of artistic patterns -- like the kolam patterns of southern India, the designs on Greek vases, and all kinds of tiling patterns from decorative art. These humanly generated patterns turn out to be not all that different from biological shapes. Which makes sense -- we humans are biological systems, after all. And our language is a biological product, as well.

MELISSA: So language is everywhere, in art and nature as well as in communication.... It's clearer to me now than ever.

NAT: Okay, enough of this language-mania. This "everything is language" kick is starting to drive me crazy.

MELISSA: But everything is language.

NAT: Just drop it!

    You know, Dr. Z, I've never been all that interested in fractals anyway.... But I do keep hearing about this thing called "fractal image compression." Apparently this guy named Barnsley is making millions of dollars off this new method for using fractals to store pictures in very small computer files. Do you know anything about this?

DR. Z: Well it's hardly right to call it Barnsley's method -- the basic idea of what Barnsley does goes back to 1978, to a long and very elegant paper in the Indiana University Mathematics Journal, written by a guy named John Hutchinson. Hutchinson showed how complex fractal sets could be generated by the repetition of an extremely simple mathematical process, what's now called an iterated function system. But no one paid much attention to his ideas until the mid-1980's, when they were popularized, applied, and further developed by Barnsley -- who was then just a fairly anonymous mathematician ... like me. Well, not much like me, but anonymous like me -- you know what I mean!

    Anyway, now Barnsley has long since quit Georgia Tech, and his company Iterated Systems Inc., has become a multimillion dollar enterprise. I've heard mathematicians talk about "the unfortunate Hutchinson" -- meaning that John Hutchinson, a perfectly legitimate and upstanding mathematician, has involuntarily become associated with all the exaggerations and advertising hype that go along with the commercial application of scientific and mathematical ideas.... And, of course, John hasn't reaped any of the profits from his ideas, either! There is no way to patent an idea; and even if there were a way of patenting mathematical ideas, Hutchinson probably wouldn't have bothered. The profit-making potential of his ideas wasn't apparent back in 1978....

NAT: Yeah, Dr. Z, but Hutchinson's situation is all in the natural course of science, isn't it? Poincaré had no way of knowing that his eccentric investigations of multi-body dynamics would lead to whole new fields, like chaos theory and topology. As you yourself pointed out, the inventors of "mathematical monstrosities" in the early part of the century had no way of predicting that their ideas would someday be applied to image compression, psychophysics, geoscience, and so forth. There's just no way to predict which abstract ideas are going to lead to practical developments -- even the discoverers may not know!

DR. Z: Well put. Which is exactly why the current research funding situation in the U.S. is so pitiful. They only want to fund research that will give some kind of immediate financial payoff -- the hell with basic science. They don't realize that really interesting ideas are always a gamble -- you just have to fund ten things, knowing that nine of them will yield no profit, and the other one will pay back your total investment a hundred times over.... Hutchinson's Australian, though, and the funding situation isn't quite as ridiculous down in the land of Oz....

    Anyway, I'm sorry I got off on that tangent. If you want to know about fractal image compression I can certainly explain it to you. I've taken the trouble to understand it myself.... You understand why image compression is important, right? It goes back to the saying -- you can never be too rich or too thin, or have too much hard drive!

MELISSA: That's sick. In this age of anorexia --

DR. Z: Don't get hot under the collar; it's just a stupid joke.

    The point is that pictures take up a lot of computer memory. You can store a whole short novel in the memory space required to store a single color picture. So there's a premium on methods to store pictures in smaller and smaller computer files, files that take up less and less memory. It's just a matter of pattern recognition, basically: you want to recognize patterns in the pictures, ways to represent the pictures in terms of something simpler. Then you store the simpler representation instead of the picture, and when you need the picture, you regenerate it from its representation. Barnsley's idea was to do this using fractals: to represent a picture by a fractal equation, in such a way that it takes way less memory to store the fractal equation than to store the picture. The fractal equation regenerates, not the exact picture, but something that looks almost exactly like the picture. To accomplish this he borrowed Hutchinson's mathematics and made some minor modifications of it.

    See, it's easy to get the picture from the equation. The hard part is guessing the equation to go with the picture. This is a common situation in applied math; it's called an inverse problem. Is this a familiar term to you?

MELISSA: An inverse problem would be a solution, right?

    Or is it a problem that you encounter while standing on your head?

DR. Z: Not quite.... What it means is a situation in which you want to undo something that you've already done. Maybe the simplest example of an inverse problem is factoring numbers. It is easy to multiply 13x17x23 = 5083. But, given only the number 5083, it's not particularly simple to run the process backwards, and figure out what numbers were multiplied together to get 5083. Try it yourself -- what numbers were multiplied together to get 221?

    Got that already, Nat, huh? What about 11501?

    See, how much longer does it take to solve the inverse problem of factorization, as opposed to the forward problem of multiplication? Actually, the difficulty of factorization is crucial to our national security! Most contemporary coding systems depend on factorization in one way or another. In order to know how to crack a modern code, one would have to know how to factor a large number within a reasonable period of time. Since no one knows how to do this, the codes are in practice uncrackable. But if some foreign power had a mathematical breakthrough which enabled them to rapidly factor large numbers, then they would be able to penetrate all our encryption schemes, and uncover our deepest national secrets.
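The forward/inverse asymmetry Dr. Z is pointing at shows up even in the most naive factoring method, trial division. Multiplying is one line; undoing the multiplication takes a search. (Real cryptosystems use numbers hundreds of digits long, far beyond this approach.)

```python
# Naive trial-division factorization -- the "inverse" of multiplication.

def factorize(n):
    """Return the prime factors of n, smallest first."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime as many times as it occurs
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(13 * 17 * 23)      # the forward problem: prints 5083 instantly
print(factorize(221))    # the inverse problem: prints [13, 17]
print(factorize(5083))   # prints [13, 17, 23]
```

Trial division takes time roughly proportional to the square root of n, which is why the same idea becomes hopeless for the enormous numbers used in encryption.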

MELISSA: That was in a movie, wasn't it?

NAT: Yeah -- Sneakers, I think.

DR. Z: A rare movie with mathematicians in it. Including a very attractive and, ah, lusty female mathematician -- something I don't encounter very often, unfortunately!

    Anyway, factorization is one inverse problem, and another one is the "fractal inverse problem" -- finding the fractal equation to generate a fractal picture that looks like the picture you need to store. Barnsley's crucial insight was to approach the fractal inverse problem using Hutchinson's idea of iterated function systems.

    An iterated function system captures the self-similarity of a fractal -- it consists of transformations that produce parts of the picture from the whole picture. The most straightforward way to apply Hutchinson's ideas to image compression would be to try to guess the transformations reproducing a picture. In other words, one needs to find out what sections of the picture, if any, mimic the whole. But there are two very serious problems with this idea. The first is that, even for simple fractal pictures, it's awfully hard to do! One of my Master's students spent the better part of a semester trying various ruses to get the computer to guess the three transformations underlying the Sierpinski triangle! And the second problem with the straightforward approach is even more severe: it is that most pictures that one might want to compress don't have any significant overall fractal structure. Even if we could solve the problem of guessing the transformations for the Sierpinski triangle and its ilk, how would that help us to compress a picture of, say, Idi Amin riding a hippopotamus? Thankfully, Idi Amin was not made up of little tiny Idi Amins -- though that might make a nifty poster ... certainly more innovative than your Mandelbrot set!

    Barnsley's method avoids the problems of this straightforward approach by focusing on local self-similarity. Instead of asking for the whole picture to be similar to its parts, he asks for parts of the picture to be similar to their sub-parts. For instance, a patch of Amin's hair may be fractally similar to its sub-patches; and the same for a patch of the hippopotamus's skin, a patch of the sky overhead, a patch of the bark of the tree behind him, etc. By looking for local self-similarity, Barnsley softens the computational difficulty of the inverse problem, and at the same time broadens the applicability of Hutchinson's IFS's to pictures without obvious fractal structure.

    The method isn't flawless: though it produces very small compressed files and decompresses very quickly, the compression itself is generally slow. Even restricted to local self-similarity, the search for transformations is not an easy one. To speed things up, you try to use information gleaned from previous experience compressing other pictures; but this isn't always successful. Still, even in the unlikely event that the speed problem is never solved, fractal compression will still have an important place in the computing world. If one wishes, for instance, to save a large collection of pictures on a CD-ROM, then there's no better method -- the most important thing is fitting as many pictures as possible, and no known method is more effective at squeezing picture files down to small sizes. Even when a picture doesn't look fractal in the least bit, there is something to be gained from squeezing out the fractal structure that's hiding in the component parts!

NAT: That's interesting, actually. What I can understand of it. The thing is that it's much easier to simulate a system involving a high degree of interdependence, than it is to analyze such a system into its parts. That's why all this complexity science stuff didn't come about till we had computers. Ordinary science is based on analysis, on breaking things down into parts, but when you have a really complex system that's awfully tough to do. With computers you can study something without breaking it down into parts -- you can study it by trying to build simpler systems that manifest the same type of interdependence and emergence that it does. But the fractal image compression thing, that tries to bring analysis back into the picture. It has to do with taking the result of a highly interdependent system -- the iterated function system, you called it -- and guessing the component processes from which it was produced.... Hmmm....

MELISSA: Hey, guys, whattaya say we go to the video store and rent an Arnold Schwarzenegger flick? I think my brain has had about all the stimulation it can take for one day....

NAT: Well... do you really want to give up so early this time? Let's just take another break. I'll call out for some pizza.

    I really want to get back on the brain stuff. I'm thinking about the relation between your design for a thinking machine, Dr. Z, and the way the brain really works. Some things are puzzling me, I want to work them out before I get too deeply into coding....

MELISSA: Oh, all right. Get double anchovies, though!


     5

     PROBING THE THREE POUND ENIGMA

Still at Nat and Melissa's house, munching on pizza. There's a knock at the door; Nat answers it and it turns out to be Janine, his ex-wife.

JANINE: Hi Nat, Melissa. Ummm ... just came over to borrow your lawnmower, like I said on the phone the other day, remember?

NAT: Yeah, of course I remember. Come on in. This is Dr. Z, one of my teachers from university.

JANINE: Yeah, I remember him. You had him for logic, right?

NAT: Right. Dr. Z, this is my ex-wife, Janine. She's a neurosurgeon at the Memorial Hospital.

DR. Z: Well, hello Janine -- how apt that you should show up right now! I've just been telling Nat and Melissa about this mathematical model of mind I've been developing, and sort of rambling on about related ideas. I want to convince him to program some of my ideas; I want to build a brain in a box! Anyway, Nat was just now asking me about the relation between my ideas and the actual workings of the brain. Neuroscience has always been my weak point; I was just worrying about what I was going to say and then -- bing! like magic, here appears a neurosurgeon at the door. It's almost enough to make me believe in some kind of divine providence.... Think you could stay a while and chat with us, Janine?

JANINE [glancing nervously at Melissa]: Well, I guess so. My schedule's free for another hour or so, anyway. And it's getting kind of late to mow the lawn today. Our lawnmower's in the shop; they're taking forever to fix it; they had to send away for some weird part.

NAT: Just throw it out and buy a new one -- that thing always frightened me anyway.... I was always afraid it was going to develop its own intelligence and mow us all to pieces....

MELISSA [laughing]: Yes, can you imagine -- the rebellion of the AI lawnmowers! You could make it a horror story --

NAT: No, no -- an existential saga. Rising up against their owners, they mow the people of the world to pieces, but then find themselves living lives devoid of meaning. The only thing they're built to do is mow lawns, but there's no one left who cares whether the lawns are mowed or not. An eccentric genius lawnmower winds up creating androids in order to recreate the old order....

DR. Z: The brain has always fascinated me, Janine. But it's baffled me at the same time. Somehow this chemistry, physics, cell biology gives rise to my thoughts, desires, feelings -- me. But all you learn in neurophysiology class is a bunch of meaningless brain structures and neurotransmitters. It just doesn't add up. It's like trying to go from the equations of quantum physics to the properties of DNA, in one fell swoop -- there's some intermediate level missing.

NAT: I see what you mean there. I was just reading this biography, about Linus Pauling -- it isn't bad at all. See, Pauling started out by taking basic quantum physics -- which was new at that time, the 1920's and 1930's -- and applying it to chemistry. His genius was to connect quantum physics with basic chemical intuition about valence bonding. Then, instead of just pressing ahead with that, when he reached middle age he switched to organic molecules. You don't hear his name in that context so often, but in fact it was he more than anyone else who invented molecular biology -- he figured out the helical twisting of molecules, the coding of information in chains of amino acids, and all that fancy stuff. Finally he tried to go a step further to explain the nature of memory and schizophrenia -- to build a physical chemistry of the mind! He never got too far in that direction though; he got distracted by stuff like the peace movement and vitamin C therapy....

MELISSA: Trying to save the world and trivial things like that.

NAT: Right. But the point is, in trying to apply brain science to the mind we're sort of skipping past an intermediate level. See, Pauling had to go from quantum physics to inorganic molecules, then from inorganic molecules to organic molecules, and then he tried to go from organic molecules to mind. But he was skipping a step there, at the end: his explanations of mental phenomena were too simplistic. Like he wanted to explain memory in terms of salt crystals forming in the watery regions inside brain cells. I think a lot of theorists are still making the same kind of mistake he did: trying to jump up a level, to apply brain chemistry and neuron electrophysiology to thoughts without getting at the level in between. It's as if Pauling had tried to apply quantum mechanics to DNA directly, instead of doing what he did, which was to apply quantum mechanics to regular chemistry, and then apply the new improved regular chemistry and quantum mechanics together to DNA....

DR. Z: Yes, that's a neat way of putting it. What I always say is, there's an intermediate-level representation missing.

JANINE: Well, it's certainly true that the middle-level representation is missing. But even the lower and higher level representations aren't that clear.

    There's a whole controversy now about the neuron doctrine. It's sort of at the fringes of neuroscience, I guess, but some of the people involved certainly know what they're talking about. All sorts of new approaches are being thrown about, using ideas from quantum field theory, chaos theory and just about everything else. Actually Pauling's idea about low-level physical chemistry in the brain fits right in with this stuff -- maybe he had something with those salt crystals you were talking about....

    And at the other end, we're really starting to get a whole new idea about what the different regions of the brain do. I've been doing some work with PET and fMRI scanners over at the hospital: with these things you can really see what parts of the brain are involved in what activities. So, for example, attention seems to involve three different systems: one for vigilance or alertness, one for sort of executive attention control, and one for disengaging attention from whatever it was focused on before. Three totally different systems, all coming together to let us be attentive to something in front of us....

MELISSA: Wow -- now that's complexity.... The thing is, something very simple, almost unanalyzable like focusing attention on something, is boiled down to a huge mass of details, to a bunch of different complicated systems all coming together in a complicated way....

DR. Z: Yes, but the details don't have to be essential to the nature of the emergent phenomenon. The attention, the awareness itself, is some kind of attractor that comes out of the low-level details. But the same kind of attractor might come out of lots of different low-level details -- that's the crucial thing! The key thing is the language of it -- the linguistic, geometric structure of the attractor, not the microscopic details that give rise to it!

JANINE: Okay, but the details give you a lot of information about what actually goes on. Depending on which of the three systems is damaged, you get different kinds of awareness deficits. If you had a different kind of brain in which ... what would you call it? the awareness attractor? ... in which the awareness attractor was somehow emergent from the combined activity of only two systems, instead of three, then you'd find different kinds of awareness deficits. So you can't explain everything on this abstract mathematical level.

DR. Z: Yes, I can accept that. Just like you can't explain everything about molecular biology using organic chemistry alone. Sometimes you have to go down to the basic physical chemistry level. In the real world, levels always cross....

JANINE: Yeah, the brain is an excellent example of that. All the different levels are so mixed up it takes a real leap of faith to disentangle them. The word "complexity" is totally apropos -- the moment one tries to give even the most cursory overview of brain function, complexity pops up like a many-headed demon.

    See, most attempts to discuss the brain from a mathematical point of view are totally caught up in the neuron doctrine. Ever since Ramon y Cajal discovered the neuron way back when, it's been considered the basic unit of brain structure. But it's still not clear that this idea is right. My friend Stuart Hameroff -- an excellent anesthesiologist -- has suggested that the essence of intelligence lies in the dynamics of the neural cytoskeleton -- in the molecular biology in the walls of the neurons.... This is what Roger Penrose, the famous mathematician, is always on about. He thinks you need some fancy theory of quantum gravity to explain what's going on in the cytoskeletons of neurons. It's a pretty kooky idea, though; I'd say no one takes it very seriously.

    On the other hand, Gerald Edelman, who won a Nobel for his work in immunology, believes the best level to focus on is higher up. According to him, the specifics of neuronal behavior are less important than the collective behavior of higher-level constructs such as neuronal groups.... But then, to explain the formation of groups he reaches down to the molecular level and talks about things like cell adhesion molecules.

    So there's the chemical level, and the neuronal group level, and it's not so clear why the neuron level has been singled out as fundamental. It may be that there are interactions between neuronal groups and brain chemistry that somehow bypass the individual neuron level. You can't really tell at this point. Basic stuff isn't known. One issue you hear a lot about lately is whether the connections between neurons should be viewed as deterministic or probabilistic. In other words, does a synapse between two neurons let charge through with a certain probability, or does it let a certain percentage of charge through each time.... When stuff like this is unknown....

MELISSA: I'm sorry, you're talking way past me here, Janine. I barely know what a neuron is, let alone understand why it's essential or inessential....

DR. Z: Okay, that's fair enough. Let me take a couple moments and fill you in. Since I don't understand it as well as she does, I may be able to explain it more clearly!

    First of all, the neuron is a nerve cell. Nerve cells in your skin send information to the brain about what you're touching; nerve cells in your eye send information to the brain about what you're seeing. Nerve cells in the brain send information to the brain about what the brain is doing! The brain monitors itself -- that's what makes it such a complex and useful organ!

    How the neuron works is by storing and transmitting electricity. This fact never ceases to amaze me -- the same thing you see in the sky during a thunderstorm, the same thing that makes your light bulbs shine and your television light up -- this is the force that makes your own thoughts go around in your head.... We're all running on electric! To get a concrete sense of this, look at how electroshock therapy affects the brain -- or look at people who have been struck by lightning. There was one guy who was struck by lightning and never again was able to feel in the slightest bit cold. He'd go outside in his underwear on a freezing, snowy winter day, and it wouldn't bother him one bit. The incredible jolt of electricity had done something weird to the part of his nervous system that experienced cold....

    The neuron can be envisioned as an odd sort of electrical machine, which takes charge in through certain "input connections" and puts charge out through its "output wire." Some of the wires give positive charge -- these are "excitatory" connections. Some give negative charge -- these are "inhibitory." But the trick is that, until enough charge has built up in the neuron, it doesn't fire at all. When the magic "threshold" value of charge is reached, all of a sudden it shoots its load. This "threshold dynamic" is the basis of many computer models of the neuronal network.

    Viewed over a longer time scale, what threshold dynamics means is that, the more charge one feeds into a neuron, the more frequently it will shoot. At the crudest level, a neuron may be understood as converting voltage into frequency. The more charge one feeds into the neuron, the faster it puts out its short bursts of charge. And the charge that is sent out goes into other neurons, through excitatory connections that encourage firing, or inhibitory connections that discourage it.

JANINE: Okay, so that's the mathematician's view. What....

DR. Z: Now, that's not fair. It's a lot broader than that. First of all, it's not popular in mathematics at all -- much more so in computer science. A lot of the work on neural networks is published in electrical engineering journals. But probably a substantial percentage of the people working with these models -- maybe a majority -- are psychologists. Psychologists who want a model that's simple enough to understand, but explains some of the flexibility and fluidity of human thought. It's not important to capture all the biological details, just to....

JANINE: But I'm not just saying that these models leave out some of the details. What I'm saying is that they may be totally wrong. The whole neural network idea of the brain is just an approximation, founded on what we happen to know about the brain right now. Okay, I guess it's interesting to look at mathematical systems inspired by the brain, but you can't pretend they're brain models....

DR. Z: Well, they're models. No one's claiming that they're brains....

JANINE: It seems to me that some people are claiming things very close to that. I think the whole business of neural network modeling is going to fold up pretty soon -- as soon as people get wind of how little of the subtle organization of the brain is captured by these silly little networks of mathematical neurons!

    What you so glibly call "neural connections" are actually gaps, right? Charge can't simply leap across a gap, it has to be carried across by some chemical, some neurotransmitter -- glutamate, acetylcholine, serotonin or whatever. There are dozens and dozens of neurotransmitters, the specific purposes of which are mostly unknown.... And there's also plenty of charge just going through the cellular matrix, not along synapses at all. The role of this diffuse charge is unknown, and isn't captured in your neural network models....

DR. Z: They're not my neural network models; I'm not a neural network man myself. But I do think they're interesting. Actually, neural nets have already sort of fallen out of favor in psychology, but they just keep growing stronger and stronger in comp sci and electrical engineering. They're bloody good at solving problems, that's the point.

    Anyway, there are mathematical neural network models that are way more realistic -- that take into account stuff like diffusion of charge through the extracellular matrix. They're just not the most popular models, because they're so complicated.

JANINE: The brain is complicated, dammit! You can't solve the puzzle by replacing the brain by some other, simpler system that you'd rather study instead! The other system may be simpler but it doesn't think or feel -- that's the important thing!

NAT: But wait a minute, Janine -- the whole idea of complexity science is that the same emergent behaviors can come out of a lot of different underlying systems. So maybe these neural networks are bad brain models, but they could still display a lot of the behaviors that real brains do. Like, I've read about these experiments they do with model neural networks that learn to read and pronounce words. If they destroy some of the neurons and connections in the network, what they get is a dyslexic neural net. The same thing you get if you lesion someone's brain in the right area. You can do the same sort of thing for epilepsy: by twiddling the parameters of a neural network you can get it to have an epileptic seizure. There's a lot of potential for this sort of work. Instead of testing out new drugs on people, you could test out analogues of the drugs on the neural networks....

JANINE: Yeah, right! Excuse me....

    I guess there probably are broad classes of systems that display brain-like behavior. But these classes probably include a lot of systems that aren't rigged up to look anything like the brain -- systems that don't contain simulated "neurons" and "synapses," and so on.

    As for trying out new drugs on neural networks, you can't really be serious about that. I guess you could try out different paradigms for treatment. But a real drug has so many chemical side-effects. You'd take something that worked on your neural network and give it to people, and you'd find it made them sleepy, or giddy, or was bad for their kidneys or something.... I mean, there's just no comparison between a human brain in a human body and these dinky little mathematical networks with a few hundred or a thousand logic switches called "formal neurons"....

DR. Z: Okay, but even if it is only a crummy approximation, the picture of a neuron as a threshold gate is an interesting one. Look at what it tells you about neural chaos....

    Think about it: let's say a neuron holds almost enough charge to meet the threshold requirement, but not quite. Then a chance fluctuation increases its charge just a little bit. Its total store of charge will be pushed over the threshold, and it will shoot its load. A tiny change in input leads to a tremendous change in output -- the hallmark of chaos.

    But this is just the beginning. This neuron, which a tiny fluctuation has caused to fire, is going to send its output to other neurons. Maybe some of these are also near the threshold, in which case the extra input will likely influence their behavior. And these may set yet other neurons off -- et cetera. Eventually some of these indirectly triggered neurons may feed back to the original neuron, setting it off yet again, and starting the whole cycle from the beginning. The whole network, in this way, can be set alive by the smallest fluke of chance!
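Dr. Z's cascade can be made concrete with a toy Python sketch -- three threshold neurons wired in a ring. Everything here (the ring topology, the 0.5 starting charges, the connection weight) is invented for illustration; the point is only the sensitivity: a difference of one part in a thousand in the first neuron's charge decides whether the network stays dead or reverberates forever.

```python
# Three threshold neurons in a ring. When a neuron fires it resets
# and passes a full threshold's worth of charge to the next neuron,
# so a single firing regenerates itself around the loop -- a
# self-sustaining "reverberating circuit."

def run_network(initial_charge, threshold=1.0, steps=20):
    charge = [initial_charge, 0.5, 0.5]
    total_firings = 0
    for _ in range(steps):
        fired = [c >= threshold for c in charge]   # synchronous update
        for i, f in enumerate(fired):
            if f:
                total_firings += 1
                charge[i] = 0.0
                charge[(i + 1) % 3] += 1.0         # excitatory ring link
    return total_firings

quiet = run_network(0.999)   # a hair below threshold: nothing ever fires
burst = run_network(1.000)   # at threshold: one firing per step, forever
```

A tiny change in input, a tremendous change in output: the network is either silent or permanently alive, depending on a fluctuation of 0.001.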

    So you get chaos out of these formal networks -- and, as Walter Freeman has shown in his work with the olfactory system, you get it in real neural networks too, for quite similar reasons. There is a correspondence between the model and the reality.

    What you have in Freeman's theory of olfaction is a complex neural network that displays chaotic behavior, and has a structured strange attractor. The different "wings" or regions of the attractor correspond to different smells. Figuring out what it is you're smelling is a matter of going through chaotic dynamics, all through the attractor. Once you know what it is that you're smelling, the dynamics have settled down into one wing of the attractor, and you have periodic, limit cycle dynamics, confined to that small region.

JANINE: Yes, I know Freeman's work and it's good. Gold among the feces, or whatever the metaphor is....

NAT [laughing]: That's not it, hon. Gold among feces, that's rich! Only a doctor would put it that way.... Pearls before swine, you're thinking....

DR. Z: I can accept that there might be something psychologically interesting going on beneath the neuron level. But I think there's an awful lot of power in the idea of modification of neural connections. It's such a simple idea: not all connections between neurons conduct charge with equal facility. Some let more charge through than others; they have a "higher conductance." If these conductances can be modified even a little then the behavior of the overall neural network can be modified drastically. This all seems to be pretty well-established, and it gives a neurobiological substrate for learning, so I don't see what the problem is.... What you get is a picture of the brain as being full of "self-supporting circuits," circuits that reverberate and reverberate, keeping themselves going. This goes back to Hebb in the 1940's, but it's held up since then in spite of all the advances in neuroscience. Hebb said that ideas, perceptions and actions could be represented as interlocking groups of circuits called cell assemblies ... and, to be completely frank, I don't see what's wrong with his idea.

JANINE: Well, as far as I know, Hebb's ideas still haven't been proven. We do know now about long-term potentiation, which tells you that if a neural connection is stimulated according to a certain special rhythm -- about five stimulations per second -- then its conductance will steadily rise. But....

DR. Z: Right -- and, intriguingly enough, this five-per-second figure is just about the same as the theta brain wave rhythm measured on EEG machines. The theta rhythm is generally associated with exploratory, learning behavior, thus suggesting the very attractive hypothesis that long-term potentiation is the neural mechanism of learning. It all seems to fit together perfectly well!
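The conductance-modification idea Dr. Z is defending can be sketched as a Hebbian-style update rule. This is a deliberately crude cartoon of long-term potentiation, not a model from the text: the function name, the learning rate, and the saturation value are all invented for the example.

```python
# Hebbian-flavored conductance modification: each time the pre- and
# post-synaptic neurons are active together, the connection's
# conductance rises a little, saturating toward 1.0. Connections that
# aren't co-stimulated are left alone.

def potentiate(conductance, pre_active, post_active, rate=0.05):
    if pre_active and post_active:
        conductance += rate * (1.0 - conductance)  # diminishing gains
    return conductance

w = 0.2
for _ in range(100):              # 100 paired, rhythmic stimulations
    w = potentiate(w, True, True)
# w has climbed from 0.2 to very near the saturation value of 1.0
```

Repeated paired stimulation drives the conductance steadily upward -- the qualitative shape of the LTP story, if none of the quantitative details.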

JANINE: It all fits together in your head. You're a theorist. How well it fits together in the brain is entirely another story....

DR. Z: But it's not only my head; there are plenty of neurobiologists who agree with me. Do you know Gary Lynch, at U.C. Irvine? He believes that long-term potentiation causes neurons to sprout new connections. LTP causes a change in the chemical balance surrounding the neuron, which activates the enzyme calpain, which eats through the cell membrane of the neuron and causes new connections to burst out.

JANINE: That's weird. You know what calpain is -- it's a "membrane-eating" enzyme, known for its role in blood clotting. It's what's responsible for blood platelet cells losing their rigid structure and becoming flexible enough to form a scab.... I guess the connection makes sense: it's possible. But....

DR. Z: But me no more buts, Doctor....

MELISSA: You guys are blowing my mind with this stuff. I never thought about the brain on this microscopic a level. You always hear about the left brain and right brain, but you never hear about all this chemistry stuff....

JANINE: Actually, it's funny that you mention that. I've always said that left brain and right brain isn't the most interesting distinction. It's probably more useful to think in terms of front brain and back brain.

MELISSA: Front brain and back brain?

JANINE: Right. See, in....

NAT: Uh oh, now you've got her started...

JANINE: I guess you have. My five-minute lecture on neuroanatomy --

DR. Z: Go right ahead.

JANINE: Okay. In mammals like us, the forebrain is subdivided into three parts: the hypothalamus, the thalamus and the cerebral cortex. The two largest structures of the brain as a whole are the cerebellum and the cerebrum, but there are also many obscurer regions like the cingulate gyri and so on....

    So the cerebellum has a three-layered structure and serves mainly to evaluate data regarding motor functions. The cerebrum, on the other hand, is the seat of intelligence -- it's the integrative center in which complex thinking, perceiving and planning functions occur. It...

MELISSA: Hey, I've heard of that! Are you a Ramones fan? You know,

        DDT did a job on me

        Now I am a real sickie

        Guess it's time to spread the news

        That I've got no mind to lose

        Now I guess I'll have to tell 'em

        That I've got no cerebellum

        All the girls are in love with me --

        A TEENAGE LOBOTOMY! ...

JANINE [laughing]: That's funny. But I guess they didn't get it quite right! Their cerebellums must not have been too badly damaged -- they could still play their guitars. It must've been their cerebrums that the DDT did the job on.... And anyway, everyone knows a lobotomy involves the removal of the frontal lobes, which are part of the cerebrum, not the cerebellum....

MELISSA: Picky, picky....

NAT: Of course, if Joey and DeeDee hadn't had those lobotomies in the first place, they might have remembered this stuff from high school biology....

JANINE: Seriously, the front/back distinction leads you into a lot of interesting puzzles. Did you know that the whole forebrain evolved to deal with the sense of smell?

NAT: Oh -- I always wondered why people with big foreheads could smell better....

JANINE: Seriously ... smell's not central to our lives today, but it was of paramount importance to our reptile ancestors. And it seems that the neural requirements of olfaction, smelling, are uniquely similar to the requirements of abstract thought. If you think about it, vision and hearing aren't nearly so combinatory as smell. Two sounds need not combine to make an easily intelligible sound, and two sights superimposed may make a ridiculous blur -- but two smells combined will make a perfectly admissible smell. So it's to be expected that the sense of smell will lead to neural networks with lots of sprawling combinatory connections, in which each neuron connects to a random assortment of other neurons.... Vision and hearing, on the other hand, would be expected to lead to more orderly topographical connections, snaking linearly from one neuron to another to another. Combinatory connections are precisely what is needed for abstract thought. So it seems we were rather well served by our reptile ancestors' penchant for sniffing!

    Okay, so much for the front brain.... Now the hindbrain, on the other hand, is situated right around the top of the neck. What it does is mostly to regulate the heart and lungs -- and it also controls the sense of taste, a fact which is particularly important, because, if you think about it in terms of evolution, the emergence of the forebrain can be understood as a consequence of the move from water to land, and the consequent transition from tasting to smelling. For a fish, the sense endings in the mouth are an indispensable means of exploring reality; for one thing, they are needed to detect the presence of food. But things that can be tasted when dissolved in the water, can be smelled in the air.

    So the salty, wet interior of the nose is a sort of simulacrum of the underwater environment, designed to smooth the transition from water-sensing to air-sensing. But this transition, however difficult, turned out to be a good one -- the sense of smell was the perfect "starting point" for the development of the cortex ... which is the part of the brain that gives us the ability to think, write and read about our own brains in the first place!

    And then there's the midbrain, resting on top of the hindbrain; it integrates information from the hindbrain and from the ears and eyes. Collectively, the hindbrain and midbrain are referred to as the brainstem. The midbrain appears to play a crucial role in consciousness, and in general in the unification of disparate information into coherent packages.... This function is reflected in its architecture: there's a roof portion called the "tectum," receiving information from a multilayered hierarchy of neurons.

    But these divisions are only the coarsest ones; there are plenty of other ways of categorizing the different parts of the brain. For instance, the limbic system, identified by the French neurophysiologist Broca back in 1878, is made up of a whole grab-bag of neural subsystems including -- steel yourself! -- the hippocampus-fornix, the amygdaloid nucleus, the olfactory areas, the hypothalamus and the mamillo-thalamic tract. All these regions are collectively responsible for the phenomenon of emotion....

    See, there are categories, categories and more categories ... but none of them answer the question how, only the question where. The thing is that your neural network models don't have any of this rich complexity -- they're all the same, just mathematical neurons snaking out to other mathematical neurons. There's so much subtle, interesting structure in the architecture of the brain, the overall layout of it. This structure has got to have big effects on its dynamics....

DR. Z: Yes, yes, believe it or not I know all this stuff. Only I forget it every time I'm told it! It's just a bunch of details, arbitrary bits of information. I just can't believe that the essence of mind lies in the evolutionary details of homo sapiens sapiens. All these details have got to come together to give rise to some emergent formal structure, which is the essential thing.

JANINE: Well, I'm sure there is a formal structure. But how much it can explain I wouldn't vouch for. A lot of properties of human nature have got to come down to all these pesky biological details. Look at something like sex, for instance. You couldn't argue that sex is necessary for intelligence. And yet so much of our behavior is governed by sex -- and even the details of our thought processes are gender-biased. Women are more verbal, men are more spatial, and all that.

    Anyway, guys, I've got to get going. Thanks for letting me sound off a while; it's good to talk about this stuff with someone outside the hospital for a change. It's also good practice -- did I tell you I'll be lecturing in the medical school next year?

NAT: No -- really? That's fantastic!

JANINE: Thanks. But anyway, I've got to go. Thanks for the lawnmower -- I'll bring it back later in the week.

DR. Z: Hey, don't run off so quickly, Janine! There was one more thing I wanted to ask you about. I don't get a captive neurosurgeon very often....

JANINE: Uh oh. You're not going to hold a gun to my head and ask me to perform bizarre experimental brain operations on you, I hope....

DR. Z: No, no, nothing that interesting, I'm afraid. I just want to pester you some more with my idea about neural connections. Do you know Gerald Edelman's work, his theory of Neural Darwinism? I think you mentioned him a little while ago --

JANINE: Yes, I read his book a few years back. I thought it was interesting but awfully speculative.

DR. Z: But what do you think of the basic idea that mental process is actually evolutionary process -- that's the key point, isn't it? He has the idea that, when we come up with an idea or a percept or an action, we're actually evolving this entity from among a population of different possibilities, by a process not all that different from the evolution of species in an ecosystem. This is a pretty revolutionary concept, isn't it?

JANINE: Actually, what I found most interesting was his answer to the perennial question of "heredity versus environment."

MELISSA: What was his answer?

JANINE: Basically -- some of both, but a lot of neither.

MELISSA: What do you mean, neither? What else could there be, besides heredity and environment?

JANINE: See, that's where your complexity science comes into the picture. The answer is: self-organization of the fetal brain.

    The idea is that DNA sets up the initial conditions of the fetal brain, and then the brain itself determines the rest of its structure, self-organizationally.

DR. Z: Right. And this self-organizational process is highly complex and appears to display at least one of the hallmarks of chaos: sensitivity to initial conditions ... deterministic unpredictability.

   See, as the brain grows, it iteratively determines its own structure: its structure now helps to determine its structure an hour later. But if these dynamics are chaotic, then their overall course is sensitively dependent on slight environmental fluctuations. Thus the motions of the mother and the noises outside the womb may play a role in the development of the fetal brain -- but not a predictable role, just an "arbitrary" role of pushing self-organizing neural dynamics one way or the other at a bifurcation point.

JANINE: Yeah. Edelman analyzes neural development in terms of special molecules called cell adhesion molecules. The way to think about these molecules is as special glues -- they serve to stick things together. But the complexity of their behavior puts Elmer's Glue to shame. They come in several different types: for instance, there are L-CAM's and M-CAM's. If something has L-CAM on it, it will only stick to something else with L-CAM on it; it won't stick to something with M-CAM on it. So, envision a brain full of growing neurons, each doused with a certain type of CAM. The neurons will grow out and out, but because of the weird CAM dynamics, they won't necessarily stick to the first other neuron they come across. They'll keep on snaking and snaking until they find another neuron with the right kind of CAM, the right kind of glue. Thus the tortuous, nonlocal neural connections of the brain.... Pretty cool!

    The growth process is self-organizing, because the path of a growing neuron depends on the paths of the other neurons which it finds in its way. Edelman argues that, eventually, the result of the process is a brain divided into neuronal groups -- clusters of neurons, each one full of neurons that connect more richly to one another than to neurons outside the cluster. Originally, a cluster may have formed from a group of neurons all coated with L-CAM. But in the long run, all that matters is the cluster structure.
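The like-sticks-to-like rule Janine describes can be sketched as a toy matching procedure in Python. The labels and the all-pairs wiring are invented simplifications (real growth cones follow paths, they don't consult every other neuron), but the sketch shows how matching glue types alone already split a population into clusters.

```python
import itertools

# Toy CAM-guided wiring: each neuron carries a CAM label ("L" or
# "M"), and a connection forms only between neurons whose labels
# match -- so the population falls apart into label-defined clusters.

def grow_connections(cam_labels):
    connections = []
    for i, j in itertools.combinations(range(len(cam_labels)), 2):
        if cam_labels[i] == cam_labels[j]:   # glue types must match
            connections.append((i, j))
    return connections

labels = ["L", "L", "M", "L", "M"]
links = grow_connections(labels)
# neurons 0, 1, 3 become fully interlinked; 2 and 4 link to each
# other; no connection ever crosses the L/M boundary
```

Two richly self-connected clusters, no cross-links: the cartoon version of Edelman's neuronal groups.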

DR. Z: Right! And it's these clusters that are the building blocks of thought. Each cluster realizes a certain function, and it is quite possible, perhaps even common, for internally different clusters to realize the exact same function. Degeneracy, Edelman calls it.

    So instead of networks of neurons, you wind up thinking about networks of neuronal groups. Neural network models, dating back to Hebb, are based on the idea that changing interneural connections are the essence of mentality -- but Neural Darwinism suggests that we should instead think of the modification of inter-group connections as the basic dynamic of thought. The selection of useful connections, and the weakening of useless connections -- this is the "Darwinism" that gives Edelman's theory its name.
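The selectionist dynamic Dr. Z just named -- useful connections strengthened, useless ones weakened -- can be put in cartoon form too. Everything here is an editorial illustration: "usefulness" is just a boolean flag, and the gain and decay factors are arbitrary.

```python
# Edelman-style selection among inter-group connections: each round,
# connections flagged as useful are multiplicatively strengthened and
# the rest decay toward zero.

def select_connections(weights, useful, gain=1.2, decay=0.8):
    return [w * (gain if u else decay) for w, u in zip(weights, useful)]

weights = [0.5, 0.5, 0.5]
for _ in range(10):
    weights = select_connections(weights, [True, False, True])
# the two "useful" connections grow strong; the middle one withers
```

After ten rounds the useful connections dominate by a factor of more than fifty -- the "Darwinism" of the theory, reduced to a single line of arithmetic.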

NAT: Okay, fine, but it's still a long way from groups of neurons to mind....

DR. Z: Well, true, neuronal groups are still too primitive to have any direct relation with mind. But what Edelman says is that the fundamental element of mind is the "neural map" -- the network of interconnected neuronal groups. The most commonly studied neural maps are the ones which connect perception with action -- sense neurons with motor neurons. This sort of neural map behaves in a very direct way: it takes a certain input and turns it into a certain output.

    And these low-level, perceptual and motor oriented neural maps are only the beginning.... There are also higher-level maps -- neural maps whose input and output consist largely or entirely of other neural maps. These maps resolve conflicts among low-level maps, and they also help to create new low-level maps, by reinforcing certain inter-group connections based on their own criteria. But this is where Edelman's theory becomes fuzzy: his biological data and his computer simulations operate on the level of perception and action, and they give very few precise ideas regarding the operation of these more abstract maps....

    So it all builds up beautifully! The brain's structure of neural connections is determined by prenatal self-organization. This structure is "modularized" into neuronal groups, so that the important interactions are between groups of neurons rather than individual neurons. The networks of neuronal groups called maps are the essential components of thought, perception and action. And, while some of these maps connect perceptual neurons to motor neurons, others just connect maps to maps. The brain almost seems like a comprehensible thing....

JANINE: It's a tantalizing theory, I have to admit. Edelman knows what he's talking about; it's not a naive neural net theory. You can't just laugh it off. But still, it's all basically speculation. Just a few facts with this grandiose theory built around them....

DR. Z: Okay. I admit that. But you have to build an understanding somehow.

    See, then Edelman goes on, in his other book, to explain how neural maps give rise to consciousness. Consciousness, he says, is made up of a collection of "re-entrant circuits" passing from perceptual to cognitive centers and back again. In other words, all that is needed for consciousness are long, loop-shaped neural maps. Some of the neuronal groups in the map are perceptual processes, others are more abstract cognitive processes. The perceptual input is analyzed by the cognitive processes, which then feed their ideas to the perceptual processes. The perceptual processes judge the ideas of the cognitive processes, and use these ideas to guide their future perception.... There are many such loops, but all have the same basic form.

MELISSA: I don't know about that, Dr. Z. Didn't we talk about consciousness already? I thought we agreed that consciousness can't be summed up in some biological mechanism like that.

DR. Z: Well, as I said before, you have to distinguish raw consciousness, pure perception, from the way consciousness manifests itself in the brain. Raw consciousness I like to associate with pure mathematical randomness. A random mathematical structure is something with absolutely no patterns in it -- something which is absolutely indescribable, which can't be summed up in any sentence or formula. The only way to describe the random is to give it, to display it. This is just like consciousness -- consciousness is pure, patternless, indescribable. There's no way to get a handle on it. This is why people can write so many different things about it....

    But the miracle is, this pure indescribable force has an impact on structured systems. So then we have a science of consciousness, which is a science of how the force of pure randomness enacts its effects upon the world of structure....

JANINE: Actually, I like that theory. It's philosophical, but it makes sense. It kills the idea that we're going to find what consciousness really is by looking in the brain. I've always felt that consciousness somehow lives beyond the brain -- that all we're getting at in the operating theatre is the way that consciousness plays around with the brain. I don't know.... It's not a well-defined feeling. But I sense that it's the same feeling that led you to come up with this randomness idea....

    See, Melissa, there's a bit of a revolution going on in the neuroscience community, in regard to consciousness. But I don't think you want to get me started on this. This is something I work with every day....

DR. Z: No, don't worry -- please do go on.

JANINE: Well, see, they used to think consciousness was some kind of special process, set apart from everything else -- that it was contained in some special circuit independent of sensory and motor circuits, in some controller circuit telling the brain as a whole what to do. But now we have a whole new perspective. First of all, we know that consciousness is a function present in several independent circuits, and not a function controlling the whole brain. We know that consciousness is a consequence of the activation of motor or sensory neurons -- it's thus involved in complex loops with "lower" parts of the brain, rather than unilaterally controlling them. And we know that consciousness is a necessary part of the perception of the world; it is needed to group disparate features into unified wholes.

    You can see this in so many ways. For instance, Area 8 of the brain and inferior Area 2 of the brain have no reciprocal connections whatsoever -- and even in their connections with other areas, such as the parietal lobe, they're quite independent. But if you lesion either of these areas, severe attentional disorders can result, including total failure to be aware of some portion of the visual field. If you look at it closely the only conclusion you can come to is that there's some kind of complex, special-purpose attentional circuit running between these areas....

    In the context of visual perception, you can think about these attention circuits by dividing the perceptual process into two stages. First the stage of elementary feature recognition, in which simple visual properties like color and shape are recognized by individual neural assemblies. And then the stage of feature integration, in which conscious circuits focus on certain locations and unify the different features present at those locations. If consciousness is not focused on a certain location, the features sensed there may combine on their own, leading to the perception of illusory objects.

    This ties in perfectly with what is known about the psychological consequences of various brain lesions. For instance, the phenomenon of hemineglect means a disinclination or inability to be aware of one or the other side of the body -- it occurs primarily as a consequence of lesions to the right parietal lobe or left frontal lobe. Sometimes, though, these same lesions don't cause hemineglect, but rather delusional perceptions. I have a patient like this right now, a young man who used to run in a street gang. He just sees things that shouldn't be there; combinations of the actual stimuli, or things just totally off the wall.

    My explanation of him is that sometimes, when there is damage to those consciousness circuits connecting features with whole percepts in one side of the visual field, the function of these processes is taken over by other, non-conscious processes. What I think is that these specific consciousness-circuits are replaced by unconscious circuits, but the unconscious circuits can't properly do the job of percept-construction; they just produce delusions....

    And this sort of phenomenon isn't restricted to visual perception. I had another patient last year, who was totally unable to perceive meaningful words spoken to him from the left side of his body. But he was perfectly able to perceive nonsense spoken to him from the same place! So the moral of his story is that, without conscious attention, the assignation of meaning is not possible; the grouping of syllables into semantic wholes requires conscious attention. A neural pathway had been paved from the speech perception centers to the cognitive centers, using a consciousness circuit, and when the consciousness circuit was damaged, this man's brain had no way of dealing with the meaningful input! It wasn't used to dealing with meaningful input unconsciously....

MELISSA: Wow.

JANINE: Anyway, I'd really better get going; I've got to be somewhere ten minutes ago.... I'll bring back the lawnmower later in the week.    

NAT: No hurry at all. Night.

MELISSA: Night.

DR. Z [after Janine has gone]: Wow, Nat, that was great -- I learned a hell of a lot. You've got a real knack for attracting brilliant women.

    You're awfully friendly with each other, I mean, given that you only divorced a few years ago. Whatever happened between you two, anyway?

NAT [glancing at Melissa]: Well, I'd rather not talk about it....

DR. Z: Fair enough. I'm worn out as anything, anyway. Time to hit the sack.

NAT: Yeah, I'm tired too. But I'm afraid I'm a little confused, at this point. I already started programming some of the stuff we talked about last week -- just defining some objects and functions and so on, nothing that really does anything. But now I'm sort of wondering what the point is. I mean, the brain is so complicated, there's so much we don't understand. Can we really hope to get anything out of some program we just cook up off the top of our heads?

DR. Z: Well, Nat, you know what the answer to that is. It depends if you believe in this complexity science idea. Can we really get the same emergent structures out of different systems, or not? Do you have to take a system like a formal neural network, that bears a surface resemblance to the brain? Or is it enough to take a system that generates similar thought-like emergent structures? Basically, how big is the basin of the mind attractor?

    The point is, the kind of system we were talking about last time is an intermediate-level model. It's above the neuron level and below the brain architecture level. The idea is that the missing intermediate level is made up of self-organizing, inter-producing process dynamics. I know, Janine doesn't believe there's a missing intermediate level, because she doesn't believe we really know anything about the lower and higher levels either, but I think she's just being difficult on purpose.

NAT: I don't doubt you there -- she has a habit of that.

DR. Z: We really do know a lot about brain architecture, and a lot about brain chemistry and neuron dynamics. What we know hardly anything about is how the one comes out of the other. Edelman's evolutionary theory gives a partial explanation, but it's stuck down very close to the neuron level; it gets awfully vague and wishy-washy as soon as it gets up too close to the level of abstract thought, subjective experience, and so on.

    What I claim is that the neurons are simulating pattern-recognition processes, which act on each other and transform each other. The mind is an attractor of this process-intertransformation dynamic. The processes implemented by neurons may be thought of as neuronal groups. But Edelman doesn't emphasize the ways these groups act on each other -- the way they recognize patterns in each other and transform each other. And to me, that's the most important thing. I want to emphasize this intermediate level -- to study the mind as an attractor of a process dynamical system.

    So that's basically the story.... Have I restored your faith at least a little bit?

NAT: I guess so.... The middle level is a dynamical system of processes acting on each other and transforming each other. The process can be implemented by neuronal groups or otherwise.... I'll have to think about it.

    You know, Janine always had a way of making me feel inferior. I guess I still take her opinions a bit too seriously.

MELISSA [half-joking]: Aha. So that's why you get along better with me, right? -- I don't make you feel inferior? Am I supposed to be flattered by that?

DR. Z: I think what he means is that you're not a show-off.

JANINE [re-entering the room]: Hey, so I'm a show-off, am I, Dr. Z. You're the one who kept asking me questions....

     Nat, I forgot the extension cord for the mower.

DR. Z: Extension cord? You have an electric lawnmower?

NAT: Sure, why not?

DR. Z: Well, just don't mow over the cord, or you'll be lighting up the sky for miles around. I never did trust those electric things. Did you ever see the movie Frankenhooker?

    No? Seriously -- I'm glad you came back, Janine. I was just thinking this over a little more. About this middle level structure. I have this model of the mind that I'm trying to use to understand the middle-level structure of the brain, and I think I sort of see how to do it....

JANINE: I really should go....

DR. Z: Okay, wait. Just stop me if I say anything wrong. We're talking about the cortex here, right. That's the bottom line, it's what makes us so much smarter than all the other animals...

Now as I understand it, the cortex is a very thin tissue, about two millimeters thick, and folded into the brain in a complicated way. It's generally understood to be structured in two orthogonal directions, right? First a laminar structure, a structure of layers upon layers upon layers. I think it's supposed to be six distinct layers....

JANINE: Well, sort of. In some areas these six can blend with each other, and in others some of them may subdivide into distinct sublayers.

DR. Z: Right, okay. But as a conceptual approximation.

JANINE: Sure.

DR. Z: Okay, that's one direction of structure. Then, perpendicular to these six layers, you have large neurons called pyramidal neurons, which connect one layer with another. Right? Surrounded by smaller neurons of all different types, but mostly by interneurons.... The pyramidal neurons form the basis for structures called cortical columns, which extend across layers.

JANINE: Yeah. The pyramidal cells comprise most of the neurons in the cortex -- round about three quarters would be a good figure. I think they tend to feed into each other with excitatory connections.

DR. Z: So there's good reason to consider the network of pyramidal cells as the "skeleton" of cortical organization.

JANINE: Sure, I guess you could say that.

DR. Z: Okay. Good. It's all coming together in my mind. Now, each of these pyramidal cells has two sets of dendrites: basal dendrites close to the main body of the cell, and apical dendrites distant from the main cell body, connected by a skinny kind of shaft-like membrane formation thing. Hmmm? And these pyramidal cells in the cortex transmit signals mainly from the top down....

JANINE: They're big cells, yeah. They can each get input from thousands of other neurons.... I believe they can transmit signals over centimeters.

DR. Z: Spanning different layers of the cortex.

JANINE: Right. But lateral connections between pyramidal cells can also occur, with a maximum range of a few millimeters, I guess -- either directly through the collateral branches of the axons, or indirectly through small intervening interneurons.

DR. Z: That's where you get the famous "on-center, off-surround" -- pyramidal neurons stimulate their near neighbors, but inhibit their medium-distance neighbors.

JANINE: Sure. But you also have "off-center, on-surround." You can get all sorts of weird things.
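[The "on-center, off-surround" interaction Dr. Z names has a standard toy rendering: a difference-of-Gaussians weighting, where net influence is excitatory near the cell and inhibitory at medium range. The widths and amplitudes below are illustrative, not physiological measurements:]

```python
# Minimal on-center, off-surround sketch: a narrow excitatory Gaussian
# minus a broad, weaker inhibitory Gaussian. Near neighbors get net
# excitation; medium-distance neighbors get net inhibition.
import math

def lateral_weight(distance_um, sigma_on=50.0, sigma_off=200.0):
    """Net influence of one cell on another at a given distance (microns)."""
    on = math.exp(-(distance_um / sigma_on) ** 2)          # narrow excitation
    off = 0.5 * math.exp(-(distance_um / sigma_off) ** 2)  # broad inhibition
    return on - off

print(lateral_weight(20.0) > 0)    # True: inside the "on" center
print(lateral_weight(300.0) < 0)   # True: in the inhibitory surround
```

[Flipping the sign of the two terms gives Janine's "off-center, on-surround" case.]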

DR. Z: Right. Now, back to the columnar structure.

JANINE: Well, the work on columns is mostly focused on visual cortex. There it's pretty well established that all cells lying on a line perpendicular to the cortical layers will respond in a similar way. A column of, say, 100 microns in width might correspond to line segments of a certain orientation in the visual field. The thing is that neurons of the same functional class, in the same cortical layer, and separated by several hundred microns or less, share almost the same potential synaptic inputs. The inputs become more and more similar as the cell bodies get closer together.

NAT: So the brain uses redundancy to overcome inaccuracy? Each neuron is really unreliable -- but the average over 100 or 1000 neurons can still be reliable. That's what we can't do with computers. We have more reliable components, but making the components costs money; we can't just multiply them over and over to compensate for inefficiency.

MELISSA: But a growing system can make new parts easily. Improving the reliability of the parts is harder....

JANINE: Sure. In the case of motion detection neurons, each individual neuron may display an error of up to 80% or 90% in estimating the direction of motion -- but the population average may be exquisitely accurate. We have no problem telling what direction things are moving in.

    Actually, if you think about it, the most striking thing about the cortex isn't its accuracy at doing any one thing -- it's how incredibly mixed up it is. As compared to other brain regions, I mean. In the system of pyramidal cell to pyramidal cell connections, the influence of any single neuron on any other one is really very, very weak. Very few pairs of pyramidal cells are connected by more than one synapse. Instead, each pyramidal cell reaches out to nearly as many other pyramidal cells as it has synapses -- thousands and thousands. These cells can be spread over quite a large distance, so if you add up the figures, you'll probably find that no neuron is more than a few synapses away from any other neuron in the cortex. If you think about it, the cortex "mixes up" information in a most remarkable way. No other part of the brain does that. I guess this has to do with the origins of the cortex in the olfactory brain of reptiles. The sense of smell mixes things up, combines things, in a way that senses like vision and hearing don't....
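[Janine's population-averaging point -- each motion-detection neuron wildly inaccurate, the population average exquisitely accurate -- is easy to check in a toy simulation. The error range and the cell count here are illustrative stand-ins for her figures:]

```python
# Each "neuron" reports the true motion direction plus noise of up to
# +/-80 degrees; the average over 1000 such reports is far more accurate
# than any single cell.
import random
random.seed(1)

true_direction = 90.0  # degrees
reports = [true_direction + random.uniform(-80.0, 80.0) for _ in range(1000)]
population_estimate = sum(reports) / len(reports)

# The population estimate lands within a degree or two of the truth,
# even though individual cells can be off by 80 degrees:
print(abs(population_estimate - true_direction))
```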

DR. Z: Right. Okay. So my idea is that the multiple layers of the cortex correspond to the hierarchical structure of my abstract mental network. And the pyramidal cells based in each level of the cortex are organized into attractors that form a two-dimensional associative memory. Things related to each other are stored near each other.

    Remember, in my model of the mind there are these two big meta-attractors, a hierarchical perception/control structure and a heterarchical associative memory structure....

NAT: You think these are identical to the two perpendicular structures of the cortex....

DR. Z: Right.
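[The "associative memory as attractor" idea has a textbook toy version: a Hopfield-style network, where stored patterns become fixed points of the dynamics and a corrupted cue is pulled back to the nearest stored memory. This is a standard illustration, not a claim about actual cortical circuitry:]

```python
# Hopfield-style associative memory sketch: Hebbian outer-product
# weights, then repeated threshold updates pull a noisy cue into the
# stored attractor.

def train(patterns):
    """Hebbian weights for +/-1 patterns, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronous updates; the state settles into a stored attractor."""
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1 for i in range(len(state))]
    return state

memory = [1, 1, 1, 1, -1, -1, -1, -1]
w = train([memory])
noisy = [1, -1, 1, 1, -1, -1, -1, -1]   # one bit corrupted
print(recall(w, noisy) == memory)        # True: the cue falls into the attractor
```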

JANINE: Well....

    I can see what you're saying. It does make some sense....

But there's so much it doesn't answer.... Like, we're just focusing on these two structures of the cortex. But what about these results on the relation between cortex and hippocampus?

DR. Z: You tell me.

JANINE: Well, it seems that cortical-hippocampal feedback loops play a fundamental role in helping the cortex to deal with symbolic, declarative information. Like, information about things, facts -- rather than procedural memory about how to do things. If you sever the connections from hippocampus to cortex, then you get a person who can no longer form new declarative memories, but can still learn motor procedures...

MELISSA: How did they figure that out? Experimenting on prisoners of war? Yecch.

JANINE: Actually it was a man they call H.M. They removed part of his hippocampal complex in order to save him from a severe case of epilepsy.

MELISSA: Did it work?

JANINE: Well, his epilepsy was controllable by drugs after that. But the guy didn't know who anyone was. You'd leave the room and five minutes later if you re-entered he was meeting you again for the first time. Really sad. But yet he could learn new tunes on the piano -- that's procedural memory, not declarative.

DR. Z: Okay. But that's totally cool. See, the symbolic memories, declarative memories, have to be stored somehow in the cell assemblies in the cortex. Remember our symbolic dynamics, our formal languages emerging from complex systems. Cell assemblies can have complex chaotic dynamics with emergent linguistic structure too. Why not?

    So the idea is that these cortical-hippocampal feedback loops in fact serve to encode and decode symbolic memories in the structures of the attractors of cortical neural assemblies. In fact, I could show you how these encoding and decoding operations could be carried out by biologically plausible methods.... Definitely. Declarative memories are stored as linguistic structures in the chaotic dynamics of neural networks. That fits in totally with my model of mind -- everything is accomplished by autopoiesis, self-organization.... So it's no problem! No problem at all!
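[Dr. Z's "formal languages emerging from complex systems" has a classic toy version: symbolic dynamics. Run a chaotic map, partition its state space, and read off one symbol per step; the attractor's structure shows up as grammar-like constraints on the resulting strings. The logistic map with the half-interval partition is the textbook example, nothing specific to neural assemblies:]

```python
# Symbolic dynamics sketch: iterate the fully chaotic logistic map and
# emit 'L' or 'R' depending on which half-interval the state is in.

def symbol_string(x, steps):
    """One symbol per iteration of the chaotic trajectory."""
    symbols = []
    for _ in range(steps):
        symbols.append('L' if x < 0.5 else 'R')
        x = 4.0 * x * (1.0 - x)   # logistic map at its chaotic parameter
    return ''.join(symbols)

# A 20-symbol "sentence" encoding one trajectory:
print(symbol_string(0.3, 20))
```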

JANINE [holding her hand up, laughing]: I believe you, Dr. Z. I believe you. Really, I'm glad I came back, even if you do think I'm a showoff. I think your ideas are really going somewhere. But I've got to get home -- it's late. I don't know what you all are doing up at this hour.

DR. Z: Okay. Good night, Janine. Thanks so much for staying and talking with us.

NAT: Well, we'd better get going too....

MELISSA: Right. Good night, Dr. Z.

    Hey, when are we going to meet again? Next week is busy for Nat, he's got a program to finish -- what do you say the week after next?

DR. Z: I'm going to Budapest for fifteen days for a systems theory conference. So it'll have to be after that.

NAT: Say the thirty-first, then. Four weeks from yesterday. You're free then, aren't you hon?

MELISSA: Of course I am. Say, Dr. Z, why don't you come by our place again? About seven-ish?

DR. Z: Sounds good. See ya next month....


     INTERLUDE

On the walk home from Dr. Z's house...

MELISSA: Look at that! In front of your stomach!

NAT: What?? It's dark, I can't see anything.

MELISSA [reaching out her hand]: It's another bloody nanospy!

NAT: Hmmm.... Put it in your purse, I'll bring it by the engineering lab on Monday, see what they make of it.

MELISSA: Good idea.

NAT [after a pause]: Y'know, hon, these nanospies really have me thinking. If the CIA or whoever think this theory of his is big time, maybe there's really something to it.

MELISSA: I don't know if that's a good way to judge it. Can't you tell if it makes sense just from ... I mean ...

NAT: Well, it doesn't matter. Anyway, I think I'm going to put a lot more time into coding up some of the things we talked about.... After I finish this program with the Wednesday deadline, I mean.

    I'm worried about the size thing, though. With the computer resources I have at my disposal, I probably won't be able to get over the threshold you need to develop the associative memory network.... Not unless I make the pattern-recognition processes totally trivial.

MELISSA: Do you really think you can do it, Nat? Program a thinking machine?

NAT: I don't know. But the more I think about it, the more I'm tending to think so. Putting the size problem aside. The thing is, if I throw myself into this project, I'll be totally neglecting my contract programming for at least a few weeks. Hell, probably for months and months; nothing ever really takes a few weeks, does it? We'll go bankrupt or something....

MELISSA: Don't be silly, hon, we're well fixed financially. I keep telling you to take a few years off and go back to grad school, anyway. Maybe this'll get your interest up in that stuff again.

NAT: I know you keep telling me -- and I keep telling you I don't want to do research; I like programming. It's fun and easy and I'm good at it. You're starting to sound like Janine.

MELISSA: Thanks a lot....

    Anyway ... you want to program this thing of Dr. Z's.

NAT: Sure. Before the CIA gets to it.

MELISSA: He might be wrong about the CIA, you know. All we know is we found a strange metal bug.... The man obviously has a vivid imagination.

    But, you say you can't do it with the resources at your disposal. So what are you thinking -- to try to steal resources from somewhere else? Breach computer security systems?

MELISSA [after a long pause]: Well?

NAT [speaking with difficulty]: Think about it -- all the cycles of computer time, being wasted around the world, while computers are left idle at night or on the weekends.... There'd be no loss to anyone if you could figure out a way to make use of it. No loss to anyone at all....

MELISSA [putting her arms around him]: There'd be a loss to us if you were put in jail!

NAT: That only happens to stupid people....