The Evolving Mind -- Copyright Gordon and Breach 1993




Now we are ready to begin our journey into the world of evolution. We will begin with Sir Macfarlane Burnet's theory of "clonal selection" -- a cornerstone of modern immunology. This theory has the biologically and philosophically delightful implication that the immune system, like the ecosystem, evolves by natural selection.

    This is intriguing in itself, and it is doubly interesting because, of all the systems known or suspected to evolve by natural selection, the immune system is the simplest -- the least self-organizing. Therefore, in the immune system, we come as close as we can to finding natural selection in its "pure" form, uninfluenced by other aspects of self-organizing dynamics.

    In this chapter we will discuss recent research by Alan Perelson (1988, 1990), Rob de Boer, and their colleagues, based on the more general ideas of Jerne (1973) and others. From this research, it is becoming clearer and clearer that the simple fact of natural selection is not sufficient to explain the nature of immune behavior. One must consider the immune system as a self-organizing entity.

    In particular, it seems that it is not at all adequate to consider the environment of an antibody as something independent of other antibodies. And it seems that the way the immune system remembers which antigens it has seen before -- and hence the long-term dynamics of the immune system -- are highly dependent upon the global self-organizing structure of the immune system. It is striking that, even in such a relatively simple system, natural selection cannot be understood outside of the context of additional self-organizing dynamics.


The immune system is composed of around 10^12 cells called lymphocytes and around 10^20 molecules called antibodies. The primary purpose of the lymphocytes is to create and then release antibodies; and all the antibodies created by any given lymphocyte are identical (save for possible random errors). The surface of a lymphocyte is coated with a large number of antibodies which it has produced.

    Lymphocytes and antibodies travel throughout the body primarily by way of the bloodstream, entering tissues through capillary walls. They are formed and modified in the bone marrow, the thymus and the spleen; and they periodically pass through the lymphatic vessels, which take in all sorts of cells from the lymphatic capillaries, pass by the lymph nodes (where a large proportion of lymphocytes is always found), and then output back into the bloodstream via the subclavian veins.

    The immune system is in a constant state of flux, creating over a million lymphocytes and a hundred thousand billion antibodies every second. And it is astoundingly various, containing millions of different types of antibodies. To see why so much flux and variety is required, we must naturally ask: what is the function of antibodies?

    The major purpose of the immune system is to protect the body against invaders such as germs and viruses -- the word antigen is used to describe such invaders, and anything else which antibodies attack. The surfaces of most antigens are covered with repetitions of a relatively small number of structures: these structures are called "epitopes." The first step in the destruction of an antigen is for a number of antibodies to grab it, to latch onto it; and in order to do this an antibody must itself possess a structure which is to the epitope as a key is to a lock. This corresponding structure, this "key," is called the "paratope" of the antibody. An antibody is shaped like a Y or a T -- something like a lobster. According to this metaphor, its function is to grasp onto epitopes of antigens with its "claws" [see Figure 8].

    This "lock and key" process does not directly result in the destruction of the antigen; the purpose of the network of lymphocytes is primarily recognitive. Once a number of antibodies are latched onto an antigen, other cells move in to assist in the kill. Often an intricate chain reaction takes place in which various types of complement cells converge on the antigen and consume it. Killer T-cells may also play a role, as may nonspecific cellular agents such as phagocytes. We shall not be concerned with this aspect of immune response here.

    A general schematic description such as this inevitably omits a large number of important details. To see what sort of details are being omitted, let us consider one specific type of antigen: the protein molecule. Protein molecules play many important roles in the body (as enzymes and hormones, as the building blocks of cellular membranes, etc.), and they also form the outer layer of viruses and bacteria. Each one consists of a number of polypeptide chains, and each polypeptide chain is a long string of a few hundred amino acids (selected from the set of twenty amino acids) bonded end-to-end.

    A polypeptide chain is a "string" of amino acids, but a protein molecule looks more like a clump of spaghetti. The various polypeptide chains twist around each other according to an intricate process, and an antibody sees only the outside of the formation thus obtained. An epitope of a protein molecule is a region of its surface which is small enough to be latched onto by an antibody; such regions, in general, appear to be arrangements of no more than ten or so amino acids. The epitope is an extremely sensitive function of the structure of the protein; sometimes the replacement of a single amino acid can lead to a different epitope.

    Protein molecules are essential for immunology not only because they comprise many of the epitopes encountered in practice, but also because antibodies themselves are protein molecules, consisting of four polypeptide chains: two identical "light" chains containing around 200 amino acids, and two identical "heavy" chains containing around 400. All but about 50 of the first 110 amino acid positions on each polypeptide chain are identical for every antibody; these 50 are called "variable" positions. Although most variable positions can only be filled by two different amino acids, there are certain positions called "hot spots" which can be filled by more than two. At the tip of each variable position is a "combining site"; and the paratope of the antibody is simply the set of all its combining sites. Amino acids may not be assigned to the variable positions independently, but it is not clear exactly how constrained these choices are. It is known, however, that polypeptide chains fall into subgroups such that each member of a given subgroup has identical amino acids in some subset of the variable positions.

    There is one further complication here, one which is relatively poorly understood. We said above that lymphocytes produce antibodies. But in fact, lymphocytes fall into two categories, B-cells and T-cells, and only B-cells necessarily produce antibodies. The T-cells fall into a number of subcategories. There are killer T-cells, which are helpful in the actual destruction of antigen, as mentioned above. There are memory T-cells, which increase the rapidity of immune memory. And there are helper and suppressor T-cells, which help or suppress the antibody production of B-cells.


The essential task of the immune system is to solve the optimization problem of finding the paratope appropriate to a given epitope, the key to a given lock. But when members of one antibody type succeed in latching onto an antigen, this is only the beginning of the immune response. Once this occurs, natural selection takes over. According to Burnet's hypothesis of clonal selection, when the paratopes of a given lymphocyte are successful at latching onto an epitope of an antigen, the lymphocyte is either suppressed or stimulated, and in the latter case the lymphocyte clones itself. What determines whether a successful lymphocyte is suppressed or stimulated is largely an open question; however, it is known to be a function of T-cell behavior.

    If a successful lymphocyte is stimulated, then it clones itself and instead of one successful lymphocyte, there are two successful lymphocytes. And the same process applies to these two lymphocytes: if they are both also successful, they will both be cloned and then there will be four. When the antigen is wiped out, this process will stop, because the lymphocytes will no longer be successful.

    There are certain complications: for instance, only some of the "clones" of a given lymphocyte are true clones which are capable of subsequently cloning themselves; these are called "memory lymphocytes." The others are plasma cells, which cannot clone, but simply produce antibodies full-time; these are responsible for the majority of antibodies. A plasma cell is not a true clone in that it lacks the capacity for reproduction, but it does produce the same antibody type as its parent. A plasma cell is more powerful in the short run, but a memory cell, since it can reproduce itself, is more conducive to long-term power.

    Also, there is an interesting twist involved in the definition of "successful." Clonal selection appears to follow a "threshold logic," meaning that for every lymphocyte there is some number T such that the lymphocyte is cloned only after its antibodies latch onto T epitopes. This is biologically quite sensible: it dictates not only that antibody types which are ineffective at latching onto a given antigen are not reproduced, but also that a certain low level of antigen presence will be tolerated. Mathematically, it means that the equations governing the production of antibodies are extremely nonlinear, and hence not only difficult to analyze but potentially subject to highly complex and even chaotic behavior.
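The threshold logic just described can be sketched in a few lines. The following toy model is purely illustrative: the threshold, growth rate, and decay rate are hypothetical numbers chosen to show the nonlinearity, not measured immunological parameters.

```python
# Hedged sketch of threshold-triggered clonal growth.  All parameters
# are hypothetical, chosen only to illustrate the "threshold logic."
def update_population(pop, bound_epitopes, threshold=100, growth=2.0, decay=0.9):
    """One time step: a clone proliferates (here, doubles) only if the
    number of epitopes its antibodies have latched onto exceeds the
    threshold T; otherwise it slowly decays."""
    if bound_epitopes > threshold:
        return pop * growth
    return pop * decay

# A sub-threshold level of antigen is tolerated: the clone shrinks.
pop = 1000.0
for _ in range(5):
    pop = update_population(pop, bound_epitopes=50)

# An above-threshold encounter triggers exponential proliferation.
burst = 1000.0
for _ in range(5):
    burst = update_population(burst, bound_epitopes=500)
```

The discontinuity at the threshold is exactly what makes the full antibody-production equations so strongly nonlinear: small changes in antigen level near T produce qualitatively different trajectories.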

    But despite these and other complications, the basic idea of clonal selection is impressively simple: an antibody type is multiplied if and only if it is successful at latching onto an antigen.


Before discussing self-organization, we must address one more essential question: what, exactly, qualifies as an antigen? The immune system does not attack healthy cells of the organism to which it belongs -- but why? It attacks healthy cells from other, similar organisms: for instance, if person A has a patch of the skin of person B grafted onto his arm, the grafted skin will not be taken into the body; it will die because the immune system of person A will treat it as antigen. One might think that the immune system of each person contains genetic information as to the structure of these cells. But, in fact, if cells from an adult mouse are introduced into the body of a developing fetus, when the fetus grows up it will accept transplants from that particular mouse but not others.

    Apparently the immune system, at an early stage, learns which cells are its own and which are not, essentially by reasoning that whatever cells are there all the time are its own. This process may also take place during adulthood -- an immune system may learn to interpret something new as part of itself. But this is often troublesome. In order to promote organ transplants, it is now common to introduce agents which temporarily prevent T-cell activity in the region of the new organ. Newly created T-cells grow up interpreting the new organ as self, and eventually the old T-cells die.

    The most important lesson of the problem of self/non-self discrimination is that the immune system is fundamentally a learning system: in many important respects, it acts not by following genetic commands but rather by figuring things out for itself.


It seems that the immune system does not contain enough antibodies so that, for every antigen, there is a pre-existing antibody which corresponds to it precisely. There are a lot of antibodies, but apparently not that many. And neither does the system simply create antibodies according to genetic instructions as to which antigens are likely to invade the body. It is not inconceivable that this plays a role, but it has been demonstrated that immune systems can respond effectively to antigens which their ancestors could never have encountered (if their ancestors never encountered such antigens, there would be no reason for corresponding antibodies to be genetically coded). It is clear, then, that immune response is not merely a matter of checking a pre-determined set of possible paratopes against the epitope. Somehow, the immune system has to actually solve the optimization problem of finding an antibody with a paratope which very nearly "matches" a given epitope. It has to learn how to recognize a given epitope.

    Schematically, let d(x,y) denote the distance between x and y. Then, the task of the immune system is to minimize the function "f(lymphocyte) = d(paratope of lymphocyte, exact match of epitope)" to within a high degree of approximation. The mechanism underlying this optimization is unknown, but Jerne has proposed that

    by a more-or-less random replacement of amino acids in the hot-spot positions of the variable regions of antibody polypeptide chains, a set of millions of antibody molecules is generated with different combining sites that will fit practically any epitope well enough. It has been demonstrated [that] ... individual animals make use of entirely different sets of antibodies capable of recognizing a given epitope.

    Regarding the nature of the randomized replacement which gives rise to high degrees of mutation, there are several different hypotheses. For instance, it is not inconceivable that positions are filled at random by errors introduced in the process of repair: the enzyme which is used to refill a gap left in a variable region may be not ordinary repair DNA polymerase but rather a terminal transferase, which simply introduces whatever nucleotide happens to be nearby.

    In sum, it is clear that when none of the existing antibodies matches the antigen, the immune system must generate new antibodies. And if this is not done entirely on the basis of a genetic list of antibodies, obviously some form of random mutation must enter into the picture. The essential question is: is this "Monte Carlo" method good enough? This reduces to two immunochemical questions. First: how much does random mutation actually change an antibody? Second: how closely does a paratope have to match an epitope in order to "grab on" with a given probability? Both of these issues are essentially unresolved.
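As a rough illustration of what the "Monte Carlo" method can accomplish by sheer strength of numbers, the following sketch borrows the bit-string convention of the simulations discussed later in this chapter: paratopes and epitopes are modeled as bit strings, and a paratope is taken to "match" an epitope when its Hamming distance to the ideal matching string is small. The repertoire size and string length are arbitrary illustrative choices, not immunological estimates.

```python
import random

# Illustrative Monte Carlo search: generate many random paratopes and
# keep the one closest to a target epitope.  The "match" criterion
# (Hamming distance) and all sizes are assumptions for this sketch.
def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def monte_carlo_match(epitope, n_guesses=20000, rng=None):
    """Random guessing: return the best paratope found and its distance."""
    rng = rng or random.Random(0)
    length = len(epitope)
    best, best_d = None, length + 1
    for _ in range(n_guesses):
        paratope = [rng.randint(0, 1) for _ in range(length)]
        d = hamming(paratope, epitope)
        if d < best_d:
            best, best_d = paratope, d
    return best, best_d

epitope = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # ten "amino acid" positions
best, dist = monte_carlo_match(epitope)
```

With ten binary positions there are only 1024 possible strings, so twenty thousand random guesses almost surely find a near-perfect match; the open immunochemical questions in the text amount to asking how the real search scales when the space is vastly larger and a "good enough" match suffices.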


We stated above that the immune system has to learn to distinguish antigen from healthy cells of the organism in which it exists. However, there is one important exception to this rule: one antibody can indeed attack another. Antibodies never learn not to do this. If the paratope of one antibody happens to match the epitope of another, then the former will latch onto the latter just as if it were an extraneous antigen. Obviously, this leads to extremely complicated dynamics. One can have a chain of antibody types Ab1, Ab2, ..., Abn, in which the paratope of Ab(i) is recognized as an epitope by Ab(i-1); and one can have a loop, in which the paratope of Ab1 is recognized as an epitope by Abn.

    Such a relationship does not imply that each antibody type is continually waging all-out war against other antibody types; in fact, the avoidance of such situations may be one central purpose of the threshold logic of stimulation. In the above notation, only when the population of Ab(i) exceeds a certain threshold can Ab(i-1) attain sufficient success through attacking it to be cloned.

    But, despite this restraining factor, a chain reaction is nonetheless possible: if the population of Abn exceeds the threshold of Ab(n-1), then Ab(n-1) may be stimulated sufficiently to be cloned; and as a consequence it may exceed the threshold of Ab(n-2), etc. In the case of a cycle, if this effect reached all the way back to Ab1, then it would cause a large number of Ab1 to be produced, which would cause even more Abn to be produced, and set the whole chain reaction off again.
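A minimal simulation makes the cascade concrete. In this sketch each antibody type Ab(i) is reduced to a single population number that grows only while the next type in the chain is above its stimulation threshold; the threshold, growth rate, and decay rate are invented for illustration and carry no immunological authority.

```python
# Hedged sketch of the chain reaction: Ab(i-1) recognizes Ab(i), and is
# cloned only once the population of Ab(i) exceeds the threshold.
# All rates and thresholds are hypothetical.
def step(pops, threshold=100, growth=1.5, decay=0.95):
    """One synchronous update of a chain Ab1..Abn (index 0 = Ab1).
    pops[i] grows when pops[i+1] was above threshold; otherwise it decays."""
    new = []
    for i, p in enumerate(pops):
        stimulated = i + 1 < len(pops) and pops[i + 1] > threshold
        new.append(p * (growth if stimulated else decay))
    return new

# Start with only the last type, Abn, above threshold.
pops = [10.0, 10.0, 10.0, 10.0, 500.0]
for _ in range(30):
    pops = step(pops)
# The stimulation propagates backward along the chain, and the
# population of Ab1 (pops[0]) eventually crosses the threshold too.
```

Closing the chain into a cycle (letting the last type be stimulated by the first) would turn this one-way cascade into the runaway loop described above, which is why external restraining connections matter.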

    Obviously it would be highly undesirable for an immune system to harbor such a runaway cycle. Luckily, the immune system is immensely complex; according to Jerne, it contains a vast number of interconnected chains and cycles, so that any runaway cycles are presumably restrained by antibody types lying outside the cycle. Even in the absence of external antigen, it is not necessary that the overall immune system settle into an equilibrium state, in which nothing attacks anything else with sufficient success to be cloned. What is required is that the system is balanced in such a way that runaway chains and runaway cycles are rapidly restricted. Unfortunately, at present we have little theoretical understanding of the conditions under which such a balance is likely to be achieved.

    Some immunologists doubt the existence or importance of this "immune network," this dynamic of interattacking antibody types. They maintain that the immune system is a collection of antibodies independently evolving by clonal selection -- natural selection. However, others believe that it may be precisely the self-referential nature of the immune system -- the dependence of the evolution of each antibody upon the evolution of others -- which accounts for its impressive feats of antigen recognition and memory.

    As for the existence of the immune network, the results of recent computer simulations, to be discussed below, support a moderate view. Over the last decade, Alan Perelson (of Los Alamos National Laboratory), Rob de Boer (of the Institute for Bioinformatics in Utrecht), and others have created a number of intriguing computer simulations of the immune system. From the results of these simulations, Perelson has come to the conclusion that "the immune system combines a highly connected network with a functionally disconnected clonal system."

    Some of these simulations model each antibody as a binary sequence -- e.g. 100100100101001010 -- and use various measures of the distance between two binary sequences to gauge the extent to which two antibodies match (Perelson, 1988). Others use partial differential equations to model the surface of an antibody as a series of bumps of various heights. Regardless of the details, the results are qualitatively similar. The result of one of the most recent and realistic studies, de Boer and Perelson (1990), is that when a large group of antibodies are permitted to interact freely according to the known laws of immunology, they tend to join together into "one large connected structure." However, this structure is not a fixed collection of antibodies; it "moves around in shape space," leaving some antibodies behind and picking up others. The entire network is "one large frozen component which enables some isolated clones to remain functionally disconnected from the network." Spontaneously, each antibody winds up matching with a number (say 10-15) of others.
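A tiny version of such a bit-string model reproduces the qualitative picture of a loosely connected network. In the sketch below, two antibodies are taken to match when their strings are nearly complementary; the repertoire size, string length, and matching cutoff are arbitrary illustrative choices, tuned only so that the average antibody matches roughly ten others, in the spirit of the result quoted above.

```python
import random

# Hedged sketch of a bit-string shape-space network in the style of
# Perelson (1988).  All sizes and cutoffs here are assumptions.
def match_strength(a, b):
    """Number of complementary (differing) bit positions."""
    return sum(x != y for x, y in zip(a, b))

def build_network(n_antibodies=250, length=18, cutoff=13, seed=0):
    """Generate a random repertoire and link any two antibodies whose
    strings are complementary in at least `cutoff` positions."""
    rng = random.Random(seed)
    repertoire = [[rng.randint(0, 1) for _ in range(length)]
                  for _ in range(n_antibodies)]
    links = {i: [] for i in range(n_antibodies)}
    for i in range(n_antibodies):
        for j in range(i + 1, n_antibodies):
            if match_strength(repertoire[i], repertoire[j]) >= cutoff:
                links[i].append(j)
                links[j].append(i)
    return links

links = build_network()
avg_degree = sum(len(v) for v in links.values()) / len(links)
```

Even this crude random model yields a sparsely but globally connected graph; what it cannot show, and what the differential-equation models add, is how the connected component drifts through shape space over time.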


Due to the tremendous success of vaccination, we are all familiar with the ability of the immune system to maintain an antibody type corresponding to an antigen it has encountered in the past. Nonetheless, we do not yet completely understand the means by which this occurs even when the time since the encounter with the antigen has been far longer than the life of a lymphocyte. Somehow, the system knows to generate a certain number of new memory cells of each type every now and then. The nature of immune learning as discussed above, combined with the assumption of a highly intricate and active immune network, provides a natural explanation of this property.

    Consider a two-cycle Ab1, Ab2 (i.e. a situation in which the paratope of each of a pair of antibody types is recognized as an epitope by the other), and assume that an antigen is suddenly introduced into the system. Assume Ab1 has decent success at recognizing epitopes of the antigen, and that it therefore clones and mutates, yielding a slightly different antibody type Ab1' which is significantly more successful with the antigen. Now, since Ab1 has proliferated, Ab2 has also cloned significantly. As long as none of the mutations of Ab1 have proliferated significantly, Ab2 has no need to mutate; we have assumed it does well at latching onto Ab1. But when the population of Ab1' increases, Ab2 is presented with a large number of antibodies which it only comes close to matching perfectly; so it will begin to mutate, and with a little luck this mutation will yield a new strain, Ab2', which is a near-perfect match for Ab1'.

    So, if we accept that "matching" is symmetric, i.e. that Ab1' is excellent at latching onto Ab2' as well as vice versa, then it is clear that what has occurred is the creation within the immune system of a model of the antigen. The simple process of immune learning yields a clear and elegant explanation of the process of immune memory: the immune system remembers an antigen because it creates a model of the antigen and periodically practices latching onto the model (Jerne, 1975). This explanation is clearly not limited to cycles consisting of two antibody types; the same logic applies to a cycle of any size. If it is accepted that most antibody types are involved in a number of cycles, then this account of immune memory follows from the account of immune learning given above.
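Under the bit-string convention that "matching" means complementarity, the internal-image argument can be illustrated directly: if Ab1' is (nearly) the complement of the epitope, and Ab2' evolves toward the complement of Ab1', then Ab2' converges to the epitope itself. The single-bit hill-climbing rule below is a deliberate oversimplification of clonal selection, used only to make the geometry of the argument visible.

```python
import random

# Toy illustration of the internal image: matching = complementarity,
# so the complement of the complement of the epitope is the epitope.
# The mutation rule is an assumed simplification, not the real dynamics.
def complement(s):
    return [1 - x for x in s]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def evolve_toward(target, start, steps=2000, rng=None):
    """Single-bit hill climbing: keep a mutation only if it moves the
    string closer to the complement of the target (a better match)."""
    rng = rng or random.Random(1)
    goal = complement(target)
    current = list(start)
    for _ in range(steps):
        i = rng.randrange(len(current))
        trial = list(current)
        trial[i] = 1 - trial[i]
        if hamming(trial, goal) < hamming(current, goal):
            current = trial
    return current

epitope = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
ab1 = complement(epitope)                      # Ab1' matches the antigen
rng = random.Random(2)
ab2 = evolve_toward(ab1, [rng.randint(0, 1) for _ in range(12)])
# ab2 has evolved to match ab1, and therefore resembles the epitope:
# the network now carries an image of the antigen.
```

The symmetry assumption in the text corresponds exactly to the symmetry of complementarity here: a string and its complement match each other equally well in both directions.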

    The plausibility of this model of immune memory was verified by the author in 1985, in a highly simplified computer simulation. Since that time, Perelson and his colleagues (1991) have run much more thorough computer simulations, which demonstrate conclusively that this type of memory could, theoretically, work. Their simulated immune systems, which are relatively realistic, do indeed remember by forming internal images, particularly in situations in which one Ab1 is attended by several Ab2's.

    Finally, we should mention another interesting outcome of Perelson's computer simulations, which bears indirectly on immune memory: the emergence of frozen components. A frozen component of the immune network is a collection of antibody classes which, owing to the peculiar nature of their interactions, are very stable with respect to perturbations -- very unlikely to change. Frozen components, if there are a lot of them, will partition the space of all possible antibody types into a set of disjoint regions. Perelson's simulations indicate that the immune network is dominated by frozen components, and thus that it consists of a large mass of unchanging antibody types interspersed with families of antibody types subject to rapid change. This provides a sort of middle ground between the static and dynamic views of the immune system.


The immune system, for all its complexity, is still much simpler than the brain. And the theories discussed above, though not conclusively proven, are a great deal closer to the data than any speculations about neural pattern recognition or memory can possibly be, given the present state of neuroscience. Therefore, it is worthwhile to ask what these immunological hypotheses suggest about the workings of the brain. This is helpful not only as a method of stimulating new ideas in neuroscience -- it also aids us in interpreting the immune system.

    For instance, we may observe that the immune system does a very good job of pattern recognition despite the tumultuous, often random-looking nature of its dynamics. A great deal of this effectiveness can be explained through the sheer strength of numbers, i.e. the Monte Carlo method. If it makes enough guesses, eventually it is bound to come close to the right answer.

    And, more interestingly, we may ask: what are the neural implications of the network theory of immune memory? Today, no one still believes that the brain stores each datum in its own little compartment. It is generally understood that, somehow, most of the information in the brain is stored holistically, is spread across a wide region (Goertzel, 1992). But little else about long-term memory is agreed upon. Above all, the relation of long-term memory to thought is not well-understood. Is long-term memory merely a repository from which thought draws information? Or is it an integral part of the thought process? Consideration of the immune system suggests the latter. In the immune system, memory seems to work partly through the continual interaction of pattern-recognizing agents. Memory is, partially, a product of turning the pattern-recognition process on itself. This makes it somewhat plausible that neural memory is a product of various "traces" continually acting on each other; a hypothesis which also helps to account for the creative/destructive unreliability of memory.

    In sum: if the brain is anything like the immune system, its capabilities for pattern recognition and long-term memory emerge in a simple, natural way from the interactions of simple learning agents. In order to remember something, it does not "write" things into locations or firing patterns, but rather automatically keeps that thing alive by having it continually interact with other entities in memory.


The immune system is the simplest system known or suspected to evolve by natural selection. Its evolution takes place over a relatively short time scale, and the entities which are evolving are sufficiently simple that we may quantify them in a tractable and meaningful way. These facts assign immunology a unique role within the theory of evolution.

    If it were demonstrated that self-organizing dynamics played no role in the evolution of the immune system, this would say little about the role of self-organizing dynamics in more complex evolving systems such as the ecosystem, the brain or society. But what if it were demonstrated that the evolution of the immune system could not be understood merely on the basis of natural selection, and that self-organizing dynamics must be taken into account? Then this would imply, heuristically, that the same is likely to be true of more complex evolving systems.

    Immunological data tends to involve specific antibody types or the state of an immune system at a specific time; contemporary laboratory techniques are not up to the task of testing hypotheses about the overall structure and dynamics of the immune system. The computer simulations of Perelson and others, however, indicate very strongly that the immune network does play a prominent role: that one cannot predict the evolution of an immune system without taking into account the presence of a large network of interconnected antibody classes; and that the existence of such a network is an important part of immune memory.

    Perhaps future experimental data will show that these computer models are not as accurate as we presently believe. But even if this happens, I believe that these models will still stand out as landmarks in the history of the science of complex systems. These simulations prove that, in certain very simple evolving systems, natural selection cannot be understood except by admitting that the environment of each evolving entity consists largely of other evolving entities. These model systems were not rigged against "pure" natural selection: each one was endowed with simple, natural dynamical rules and permitted to evolve according to natural selection. The result could well have been that most entities evolved more or less independently of the others. But instead, the result was almost invariably that the whole system took on a life of its own.