Chaotic Logic -- Copyright Plenum Press 1994


Chapter Four


    I have already talked a little about deduction and its role in the mind. In this chapter, however, I will develop this theme much more fully. The relation between psychology and logic is important, not only because of the central role of deductive logic in human thought, but also because it is a microcosm of the relation between language and thought in general. Logic is an example of a linguistic system, and it reveals certain phenomena that are obscured by the sheer complexity of other linguistic systems.


    Today, as John MacNamara has put it, "logicians and psychologists generally behave like the men and women in an orthodox synagogue. Each group knows about the other, but it is proper form that each should ignore the other" (1986, p.1). But such was not always the case. Until somewhere toward the end of the nineteenth century, the two fields of logic and psychology were closely tied together. What changed things was, on the one hand, the emergence of experimental psychology; and, on the other hand, the rediscovery and development of elementary symbolic logic by Boole, deMorgan and others.

    The early experimental psychologists purposely avoided explaining intelligence in terms of logic. Mental phenomena were analyzed in terms of images, associations, sensations, and so forth. And on the other hand -- notwithstanding the psychological pretensions of Leibniz's early logical investigations and Boole's Laws of Thought -- the early logicians moved further and further each decade toward considering logical operations as distinct from psychological operations. It was increasingly realized on both sides that the formulas of propositional logic have little connection with emotional, intuitive, ordinary everyday thought.

    Of course, no one denies that there is some relation between psychology and logic. After all, logical reasoning takes place within the mind. The question is whether mathematical logic is a very special kind of mental process, or whether, on the other hand, it is closely connected with everyday thought processes. And, for the past century or so, both logicians and psychologists have overwhelmingly voted for the former answer.

    The almost complete dissociation of logic and psychology which one finds today may be partly understood as a reaction against the nineteenth-century doctrines of psychologism and logism. Both of these doctrines represent extreme views: logism states that psychology is a subset of logic; and psychologism states that logic is a subset of psychology.

    Boole's attitude was explicitly logist -- he optimistically suggested that the algebraic equations of his logic corresponded to the structure of human thought. Leibniz, who anticipated many of Boole's discoveries by approximately two centuries, was ambitious beyond the point of logism as I have defined it here: he felt that elementary symbolic logic would ultimately explain not only the mind but the physical world. And logism was also not unknown among psychologists -- it was common, for example, among members of the early Wurzburg school of Denkpsychologie. These theorists felt that human judgements generally followed the forms of rudimentary mathematical logic.     

    But although logism played a significant part in history, the role of psychologism was by far the greater. Perhaps the most extreme psychologism was that of John Stuart Mill (1843), who in his System of Logic argued that

Logic is not a Science distinct from, and coordinate with, Psychology. So far as it is a Science at all, it is a part or branch of Psychology.... Its theoretic grounds are wholly borrowed from Psychology....

Mill understood the axioms of logic as "generalizations from experience." For instance, he gave the following psychological "demonstration" of the Law of Excluded Middle (which states that for any p, either p or not-p is always true):

The law of Excluded Middle, then, is simply a generalization of the universal experience that some mental states are destructive of other states. It formulates a certain absolutely constant law, that the appearance of any positive mode of consciousness cannot occur without excluding a correlative negative mode; and that the negative mode cannot occur without excluding the correlative positive mode.... Hence it follows that if consciousness is not in one of the two modes it must be in the other (bk. 2, chap. 7, sec. 5)

Even if one accepted psychologism as a general principle, it is hard to see how one could take "demonstrations" of this nature seriously. Of course each "mode of consciousness" or state of mind excludes certain others, but there is no intuitively experienced exact opposite to each state of mind. The concept of logical negation is not a "generalization" of but rather a specialization and falsification of the common psychic experience which Mill describes. The leap from exclusion to exact opposition is far from obvious and was a major step in the development of mathematical logic.

    As we will see a little later, Nietzsche (1888/1968) also attempted to trace the rules of logic to their psychological roots. But Nietzsche took a totally different approach: he viewed logic as a special system devised by man for certain purposes, rather than as something wholly deducible from inherent properties of mind. Mill was convinced that logic must follow automatically from "simpler" aspects of mentality, and this belief led him into psychological absurdities.

    The early mathematical logicians, particularly Gottlob Frege, attacked Mill with a vengeance. For Frege (1884/1952) the key point was the question: what makes a sentence true? Mill, as an empiricist, believed that all knowledge must be derived from sensory experience. But Frege countered that "this account makes everything subjective, and if we follow it through to the end, does away with truth" (1959, p. vii). He proposed that truth must be given a non-psychological definition, one independent of the dynamics of any particular mind. This Fregean conception of truth received its fullest expression in Tarski's (1935) and Montague's (1974) work on formal semantics, to be discussed in Chapter Five.

    To someone acquainted with formal logic only in its recent manifestations, the very concept of psychologism is likely to seem absurd. But the truth is that, before the work of Boole, Frege, Peano, Russell and so forth transformed logic into an intensely mathematical discipline, the operations of logic did have direct psychological relevance. Aristotle's syllogisms made good psychological sense (although we now know that much useful human reasoning relies on inferences which Aristotle deemed incorrect). The simple propositional logic of Leibniz and Boole could be illustrated by means of psychological examples. But the whole development of modern mathematical logic was based on the introduction of patently non-psychological axioms and operations. Today few logicians give psychology a second thought, but for Frege it was a major conceptual battle to free mathematical logic from psychologism.

    In sum, psychologists ignored those few voices which insisted on associating everyday mental processes with mathematical logic. And, on the other hand, logicians actively rebelled against the idea that the rules of mathematical logic must relate to rules of mental process. Psychology benefited from avoiding logism, and logic gained greatly from repudiating psychologism.

4.1.1. The Rebirth of Logism

    But, of course, that wasn't the end of the story. Although contemporary psychology and logic have few direct relations with one another, in the century since Frege there has arisen a brand new discipline, one that attempts to bring psychology and logic closer together than they ever have been before. I am speaking, of course, about artificial intelligence.

    Early AI theorists -- in the sixties and early seventies -- brought back logism with a vengeance. The techniques of early AI were little more than applied Boolean logic and tree search, with a pinch or two of predicate calculus, probability theory and other mathematical tricks thrown in for good measure. But every few years someone optimistically predicted that an intelligent computer was just around the corner. At this stage AI theorists basically ignored psychology -- they felt that deductive logic, and deductive logic alone, was sufficient for understanding mental process.

    But by the eighties, AI was humbled by experience. Despite some incredible successes, nothing anywhere near a "thinking machine" has been produced. No longer are AI theorists too proud to look to psychology or even philosophy for assistance. Computer science still relies heavily on formal logic -- not only Boolean logic but more recent innovations such as model theory and non-well-founded sets (Aczel, 1988) -- and AI is no exception. But more and more AI theorists are wondering now if modern logic is adequate for their needs. Many, dissatisfied with logism, are seeking to modify and augment mathematical logic in ways that bring it closer to human reasoning processes. In essence, they are augmenting their vehement logism with small quantities of the psychologism which Frege so abhorred.

4.1.2. The Rebirth of Psychologism

    This return to a limited psychologism is at the root of a host of recent developments in several different areas of theoretical AI. Perhaps the best example is nonmonotonic logic, which has received a surprising amount of attention in recent years. But let us dwell, instead, on an area of research with more direct relevance to the present book: automated theorem proving.

    Automatic theorem proving -- the science of programming computers to prove mathematical theorems -- was once thought of as a stronghold of pure deductive logic. It seemed so simple: just apply the rules of mathematical logic to the axioms, and you generate theorems. But now many researchers in automated theorem proving have realized that this is only a very small part of what mathematicians do when they prove theorems. Even in this ethereal realm of reasoning, tailor-made for logical deduction, nondeductive, alogical processes are of equal importance.

    For example, after many years of productive research on automated theorem proving, Alan Bundy (1991) has come to the conclusion that

Logic is not enough to understand reasoning. It provides only a low-level, step by step understanding, whereas a high-level, strategic understanding is also required. (p. 178)

Bundy proposes that one can program a computer to demonstrate high-level understanding of mathematical proofs, by supplying it with the ability to manipulate entities called proof plans.

    A proof plan is defined as a common structure that underlies and helps to generate many different mathematical proofs. Proof plans are not formulated on the basis of mathematical logic alone; rather, they are

refined to improve their expectancy, generality, prescriptiveness, simplicity, efficiency and parsimony while retaining their correctness. Scientific judgement is used to find a balance between these sometimes opposing criteria. (p.197)

In other words, proof plans, which control and are directed by deductive theorem-proving, are constructed and refined by illogical or alogical means.

    Bundy's research programme -- to create a formal, computational theory of proof plans -- is about as blatant as psychologism gets. In fact, Bundy admits that he has ceased to think of himself as a researcher in automated theorem proving, and come to conceive of himself as a sort of abstract psychologist:

For many years I have regarded myself as a researcher in automatic theorem proving. However, by analyzing the methodology I have pursued in practice, I now realize that my real motivation is the building of a science of reasoning.... Our science of reasoning is normative, empirical and reflective. In these respects it resembles other human sciences like linguistics and Logic. Indeed it includes parts of Logic as a sub-science. (p. 197)

How similar this is, on the surface at least, to Mill's "Logic is ... a part or branch of Psychology"! But the difference, on a deeper level, is quite large. Bundy takes what I would call a Nietzschean rather than a Millean approach. He is not deriving the laws of logic from deeper psychological laws, but rather studying how the powerful, specialized reasoning tool that we call "deductive logic" fits into the general pattern of human reasoning.


    Bundy defends what I would call a "limited Boolean logism." He maintains that Boolean logic and related deductive methods are an important part of mental process, but that they are supplemented by and continually affected by other mental processes. At first sight, this perspective seems completely unproblematic. We think logically when we need to, alogically when we need to; and sometimes the two modes of cognition will interact. Very sensible.

    But, as everyone who has taken a semester of university logic is well aware, things are not so simple. Even limited Boolean logism has its troubles. I am speaking about the simple conceptual conundrums of Boolean logic, such as Hempel's paradox of confirmation and the paradoxes of implication. These elementary "paradoxes," though so simple that one could explain them to a child, are obstacles that stand in the way of even the most unambitious Boolean logism. They cast doubt on whether Boolean logic can ever be of any psychological relevance whatsoever.

4.2.1. Boolean Logic and Modern Logic

    One might well wonder: why all this emphasis on Boolean logic? After all, from the logician's point of view, Boolean logic -- the logic of "and", "or" and "not" -- is more than a bit out-of-date. It does not even include quantification, which was invented by Peirce before the turn of the century. Computer circuits are based entirely on Boolean logic; however, modern mathematical logic has progressed as far beyond Leibniz, Boole and deMorgan as modern biology has progressed beyond Cuvier, von Baer and Darwin.

    But still, it is not as though modern logical systems have shed Boolean logic. In one way or another, they are invariably based on Boolean ideas. Mathematically, nearly all logical systems are "Boolean algebras" -- in addition to possessing other, subtler structures. And, until very recently, one would have been hard put to name a logistic model of human reasoning that did not depend on Boolean logic in a very direct way. I have already mentioned two exceptions, nonmonotonic logic and proof plans, but these are recent innovations and still in very early stages of development.

    So the paradoxes of Boolean logic are paradoxes of modern mathematical logic in general. They are the most powerful weapon in the arsenal of the contemporary anti-logist. Therefore, the most sensible way to begin our quest to synthesize psychology and logic is to dispense with these paradoxes.

    Paradoxes of this nature cannot be "solved." They are too simple for that, too devastatingly fundamental. So my aim here is not to "solve" them, but rather to demonstrate that they are largely irrelevant to the project of limited Boolean logism -- if this project is carried out in the proper way. This demonstration is less logical than psychological. I will assume that the mind works by pattern recognition and multilevel optimization, and show that in this context Boolean logic can control mental processes without succumbing to the troubles predicted by the paradoxes.

4.2.2. The Paradoxes of Boolean Logic

    Before going any further, let us be more precise about exactly what these "obstacles" are. I will deal with four classic "paradoxes" of Boolean logic:

     1. The first paradox of implication. According to the standard definition of implication one has "a --> (b --> a)" for all a and b. Every true statement is implied by anything whatsoever. For instance, the statement that the moon is made of green cheese implies the statement that one plus one equals two. The statement that Lassie is a dog implies the statement that Ione Skye is an actress. This "paradox" follows naturally from the elegant classical definition of "a --> b" as "either b, or else not a". But it renders the concept of implication inadequate for many purposes.     

     2. The second paradox of implication. For all a and c, one has "not-c --> (c --> a)". That is, if c is false, then c implies anything whatsoever. From the statement that George Bush has red hair, it follows that psychokinesis is real.

     3. Contradiction sensitivity. In the second paradox of implication, set c equal to the conjunction of some proposition and its opposite. Then one has the theorem that, if "A and not-A" is true for any A, everything else is also true. This means that Boolean logic is incapable of dealing with sets of data that contain even one contradiction. For instance, assume that "I love my mother", and "I do not love my mother" are both true. Then one may prove that 2+2=5. For surely "I love my mother" implies "I love my mother or 2+2=5" (in general, "a --> (a or b)"). But, just as surely, "I do not love my mother" and "I love my mother or 2+2=5", taken together, imply "2+2=5" (in general, [a and (not-a or b)] --> b). Boolean logic is a model of reasoning in which ambivalence about one's feelings for one's mother leads naturally to the conclusion that 2+2=5.

     4. Hempel's confirmation paradox. According to Boolean logic, "all ravens are black" is equivalent to "all nonblack entities are nonravens". That is, schematically, "(raven --> black) --> (not-black --> not-raven)". This is a straightforward consequence of the standard definition of implication. But is it not the case that, if A and B are equivalent hypotheses, evidence in favor of B is evidence in favor of A? It follows that every observation of something which is not black and also not a raven is evidence that ravens are black. This is patently absurd.
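All four schemas above can be checked mechanically. The following Python sketch (an illustration, not part of the original text) verifies each one by brute force over all truth-value assignments:

```python
from itertools import product

def imp(p, q):
    # Classical material implication: "a --> b" means "either b, or else not a"
    return q or not p

vals = (True, False)

# 1. First paradox: a --> (b --> a) holds for every a, b
assert all(imp(a, imp(b, a)) for a, b in product(vals, repeat=2))

# 2. Second paradox: not-c --> (c --> a) holds for every a, c
assert all(imp(not c, imp(c, a)) for a, c in product(vals, repeat=2))

# 3. Contradiction sensitivity: both steps of the 2+2=5 derivation are tautologies
assert all(imp(a, a or b) for a, b in product(vals, repeat=2))
assert all(imp(a and (not a or b), b) for a, b in product(vals, repeat=2))

# 4. Hempel: (raven --> black) has the same truth table as its contrapositive
assert all(imp(r, b) == imp(not b, not r) for r, b in product(vals, repeat=2))

print("all four paradoxical schemas verified")
```

That every assertion passes is precisely the point: within Boolean logic these "paradoxes" are not errors but theorems.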

4.2.3. The Need for New Fundamental Notions

    The standard method for dealing with these paradoxes has been to acknowledge them, then dismiss them as irrelevant. In recent years, however, this evasive tactic has grown less common. There have been several attempts to modify standard Boolean-based formal logic in such a way as to avoid these difficulties: relevant logics (Read, 1988), paraconsistent logics (daCosta, 1984), and so forth.

    Some of this work is of very high quality. But in a deeper conceptual sense, none of it is really satisfactory. It is, unfortunately, not concrete enough to satisfy even the most logistically inclined psychologist. There is a tremendous difference between a convoluted, abstract system jury-rigged specifically to avoid certain formal problems, and a system with a simple intuitive logic behind it.

    An interesting commentary on this issue is provided by the following dialogue, reported by Gian-Carlo Rota (1985). The great mathematician Stanislaw Ulam was preaching to Rota about the importance of subjectivity and context in understanding meaning. Rota begged to differ (at least partly in jest):

"But if what you say is right, what becomes of objectivity, an idea that is so definitively formulated by mathematical logic and the theory of sets, on which you yourself have worked for many years of your youth?"

Ulam answered with "visible emotion":

"Really? What makes you think that mathematical logic corresponds to the way we think? You are suffering from what the French call a deformation professionelle. ..."

    "Do you then propose that we give up mathematical logic?" said I, in fake amazement.

    "Quite the opposite. Logic formalizes only very few of the processes by which we actuallythink. The time has come to enrich formal logic by adding to it some other fundamental notions. ... Do not lose your faith," concluded Stan. "A mighty fortress is mathematics. It will rise to the challenge. It always has."

    Ulam speaks of enriching formal logic "by adding to it some other fundamental notions." More specifically, I suggest that we must enrich formal logic by adding to it the fundamental notions of pattern and multilevel control, as discussed above. The remainder of this chapter is devoted to explaining how, if one views logic in the context of pattern and multilevel control, all four of the "paradoxes" listed above are either resolved or avoided.

    This explanation clears the path for a certain form of limited Boolean logism -- a Boolean logism that assigns at least a co-starring role to pattern and multilevel control. And indeed, in the chapters to follow I will develop such a form of limited Boolean logism, by extending the analysis of logic given in this chapter to more complex psychological systems: language and belief systems.


    Let us begin with the first paradox of implication. How is it that a true statement is implied by everything?

    This is not our intuitive notion of consequence. Suppose one mental process has a dozen subsidiary mental processes, supplies them all with statement A, and asks each of them to tell it what follows from A. What if one of these subsidiary processes responds by outputting true statements at random? Justified, according to Boolean logic -- but useless! The process should not survive. What the controlling process needs to know is what one can use statement A for -- to know what follows from statement A in the sense that statement A is an integral part of its demonstration.

    This is a new interpretation of "implies." In this view, "A implies B" does not mean simply "-A + B", it means that A is an integral part of a natural reasoning process leading towards B. It means that A is helpful in arriving at B. Intuitively, it means that, when one sees that someone has arrived at the conclusion B, it is plausible to assume that they arrived at A first and proceeded to B from there. If one looks at implication this way -- structurally, algorithmically, informationally -- then the paradoxes are gone.

    In other words, according to the informational definition, A significantly implies B if it is sensible to use A to get B. The mathematical properties of this definition have yet to be thoroughly explored. However, it is clear that a true statement is no longer significantly implied by everything: the first paradox of implication is gone.

    And the second paradox of implication has also disappeared. A false statement no longer implies everything, because the generic proof of B from "A and not-A" makes no essential use of A; A could be replaced by anything whatsoever.

4.3.1. Informational Implication (*)

    In common argument, when one says that one thing implies another, one means that, by a series of logical reasonings, one can obtain the second thing from the first. But one does not mean to include series of logical reasonings which make only inessential use of the first thing. One means that, using the first thing in some substantial way, one may obtain the second through logical reasoning. The question is, then, what does use mean?

    If one considers only formulas involving --> (implication) and - (negation), it is possible to say something interesting about this in a purely formal way. Let B1,...,Bn be a proof of B in the deductive system T union {A}, where T is some theory. Then, one might define A to be used in deriving Bi if either

    1) Bi is identical with A, or

    2) Bi is obtained, through an application of one of the rules of inference, from Bj's with j<i, and A is used for deriving at least one of these Bj's.
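The two clauses above amount to a recursive dependency check on proof lines. A toy Python sketch (the proof representation and all names here are hypothetical, chosen only for illustration):

```python
# Hypothetical proof representation: each line is either ("axiom", name)
# or ("infer", [indices of earlier lines it is obtained from]).
# A (at position a_index) is "used" in deriving line i if line i is A itself
# (clause 1), or if some premise of line i uses A (clause 2).
def uses_A(proof, i, a_index):
    kind, data = proof[i]
    if i == a_index:
        return True
    if kind == "infer":
        return any(uses_A(proof, j, a_index) for j in data)
    return False

# Toy proof: line 0 is A, line 1 an unrelated axiom of the theory T,
# line 2 is inferred from line 0, line 3 is inferred from line 1.
proof = [("axiom", "A"), ("axiom", "T1"), ("infer", [0]), ("infer", [1])]
assert uses_A(proof, 2, 0)       # line 2 depends on A
assert not uses_A(proof, 3, 0)   # line 3 never touches A
```

This captures the formal definition for implication-and-negation proofs, and it also makes the shortcoming discussed below easy to see: the check is syntactic, so a proof that routes through A gratuitously still counts as "using" it.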

    But this simplistic approach becomes hopelessly confused when disjunction or conjunction enters into the picture. And even in this uselessly simple case, it has certain conceptual shortcomings. What if there is a virtually identical proof of B which makes no use of A? Then is it not reasonable to say that the supposed "use" of A is largely, though not entirely, spurious?

    It is not inconceivable that a reasonable approximation of the concept of use might be captured by some complex manipulation of connectives. However, I contend that what use really has to do with is structure. Talking about structure is not so cut-and-dried as talking about logical form -- one always has a lot of loose parameters. But it makes much more intuitive sense.

    Let GI,T,v(B) denote the set of all valid proofs of B, relative to some fixed "deductive system" (I,T), of complexity less than v. An element of GI,T,v(B) is a sequence of steps B0,B1,...,Bn, where Bn=B, and for k>0 Bk follows from Bk-1 by one of the transformation rules T. Where Z is an element of GI,T,v(B), let L(Z) = |B|/|Z|. This is a measure of how much it simplifies B to prove it via Z.

    Where GI,T,v(B) = {Z1,...,ZN}, and p is a positive integer, let

A = L(Z1)*[I(Z1|Y)]^(1/p) + L(Z2)*[I(Z2|Y)]^(1/p) + ... + L(ZN)*[I(ZN|Y)]^(1/p)

B = [I(Z1|Y)]^(1/p) + [I(Z2|Y)]^(1/p) + ... + [I(ZN|Y)]^(1/p)

Qp,v = A/B

Note that, since I(Zi|Y) is always a positive integer, as p tends to infinity, Qp,v tends toward the value L(Z)*I(Z|Y), where Z is the element of GI,T,v(B) that minimizes I(Z|Y). The smaller p is, the more fairly the value L(Z) corresponding to every element of GI,T,v(B) is counted. The larger p is, the more attention is focused on those proofs that are informationally close to Y. The idea is that those proofs which are closer to Y should count much more than those which are not.
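For concreteness, Qp,v can be computed literally as the weighted average defined above. In the Python sketch below the values of L(Zi) and I(Zi|Y) are hypothetical toy numbers, chosen only to show how the weighting varies with p:

```python
def Q(p, L_vals, I_vals):
    # Qp,v = A/B: a weighted average of the simplification measures L(Zi),
    # with weights [I(Zi|Y)]^(1/p), following the formulas for A and B above.
    weights = [i ** (1.0 / p) for i in I_vals]
    return sum(l * w for l, w in zip(L_vals, weights)) / sum(weights)

# Two hypothetical proofs Z1, Z2 of the same statement:
L_vals = [0.2, 0.9]   # assumed simplification values L(Z1), L(Z2)
I_vals = [1, 100]     # assumed conditional complexities I(Z1|Y), I(Z2|Y)

print(Q(1, L_vals, I_vals))     # p small: the weights 1 and 100 differ sharply
print(Q(1000, L_vals, I_vals))  # p large: the weights 1^(1/p) and 100^(1/p) converge
```

This is only a numerical probe of the formula as written, not an analysis of its limiting behavior; the exact role of p depends on the complexity measure chosen.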

     Definition: Let | | be a complexity measure (i.e., a nonnegative-real-valued function). Let (I,T) be a deductive system, let p be a positive integer, and let 0<c<1. Then, relative to | |, (I,T), p and c, we will say A significantly implies B to degree K, and write

    A -->K B

if K = cL+(1-c)M is the largest of all numbers such that for some v there exists an element Y of GI,T,v(B) so that

        1) A=B0 (in the sequence of deductions described by Y)

        2) L = L(Y) = |B|/|Y|,

        3) M = 1/Qp,|Y|

    According to this definition, A significantly implies B to a high degree if and only if A is an integral part of a "natural" proof of B. The "naturalness" of the proof Y is guaranteed by clause (3), which says that by modifying Y a little bit, it is not so easy to get a simpler proof. Roughly, clause (3) says that Y is an "approximate local minimum" of simplicity, in proof space.

     This is the kind of implication that is useful in building up a belief system. For, under ordinary implication there can never be any sense in reasoning that, since A --> Bi, i=1,2,...,N, and the Bi are true, A might be worth assuming. After all, by the second paradox of implication a false statement implies everything. But things are not so simple under relevant implication. If a statement A significantly implies a number of true statements, that means that by appending the statement A to one's assumption set I, one can obtain quality proofs of a number of true statements. If these true statements also happen to be useful, then from a practical point of view it may be advisable to append A to I. Deductively such a move is not justified, but inductively it is justified. This fits in with the general analysis of deduction given in SI, according to which deduction is useful only insofar as induction justifies it.


    Having dealt with implication, let us now turn to the paradox of contradiction sensitivity. According to reasoning given above, if one uses propositional or predicate calculus to define the transformation system T, one easily arrives at the following conclusion: if any two of the propositions in I contradict each other, then D(I,T) is the entire set of all propositions. From one contradiction, everything is derivable.

    This property appears not to reflect actual human reasoning. A person may contradict herself regarding abortion rights or the honesty of her husband or the ultimate meaning of life. And yet, when she thinks about theoretical physics or parking her car, she may reason deductively to one particular conclusion, finding any contradictory conclusion ridiculous.

    In his Ph.D. dissertation, daCosta (1984) conceived the idea of a paraconsistent logic, one in which a single contradiction in I does not imply everything. Others have extended this idea in various ways. More recently, Avram (1990) has constructed a paraconsistent logic which incorporates the idea of "relevance logic." Propositions are divided into classes and the inference from A to A+B is allowed only when A and B are in the same class. The idea is very simple: according to Avram, although we do use the "contradiction-sensitive" deductive system of standard mathematical logic, we carefully distinguish deductions in one sphere from deductions in another, so that we never, in practice, reason "A implies A or B", unless A and B are in the same "sphere" or "category."

    For instance, one might have one class for statements about physics, one for statements about women, et cetera. The formation of A or B is allowed only if A and B belong to the same class. A contradiction regarding one of these classes can therefore destroy only reasoning within that class. So if one contradicted oneself when thinking about one's relations with one's wife, then this might give one the ability to deduce any statement whatsoever about domestic relations -- but not about physics or car parking or philosophy.
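The restriction can be sketched in a few lines of Python. The propositions and class labels below are hypothetical, and the partition is simply given in advance, exactly as the approach requires:

```python
# Hypothetical "spheres": each proposition is tagged with a class, and the
# disjunction "A or B" is admitted only when the two classes coincide.
CLASSES = {"A": "physics", "B": "physics", "C": "domestic"}

def may_disjoin(p, q):
    # The inference from p to "p or q" is blocked across spheres,
    # so a contradiction in one sphere cannot leak into another.
    return CLASSES[p] == CLASSES[q]

assert may_disjoin("A", "B")      # same sphere: "A or B" is admissible
assert not may_disjoin("A", "C")  # different spheres: the step is forbidden
```

The sketch also makes the difficulty plain: everything hinges on the table CLASSES, which must be fixed in advance by fiat.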

    The problem with this approach is its arbitrariness: why not one class for particle physics, one for gravitation, one for solid-state physics, one for brunettes, one for blondes, one for redheads,.... Why not, following Lakoff's (1987) famous analysis of aboriginal classification systems, one category for women, fire and dangerous things?

    Of course, it is true that we rarely make statements like "either the Einstein equation has a unique solution under these initial-boundary conditions or that pretty redhead doesn't want anything more to do with me." But still, partitioning is too rigid -- it's not quite right. It yields an elegant formal system, but of course in any categorization there will be borderline cases, and it is unacceptable simply to define these away.

    The "partitioning" approach is not the only way of defining relevance formally. But it seems to be the only definition with any psychological meaning. Read (1988), for instance, disavows partitioning. But he has nothing of any practical use to put in its place. He mentions the classical notion of variable sharing -- A and B are mutually relevant if they have variables in common. But he admits that this concept is inadequate: for instance, "A" and "-A + B" will in general share variables, but one wishes to forbid their combination in a single expression. He concludes by defining entailment in such a way that

[T]he test of whether two propositions are logically relevant is whether either entails the other. Hence, relevance cannot be picked out prior ... to establishing validity or entailment....

But the obvious problem is, this is not really a definition of relevance:

It may of course be objected that this suggested explication of relevance is entirely circular andunilluminating, since it amounts to saying no more than that two propositions are logically relevant if either entails the other....

Read's account of relevance is blatantly circular. Though it may not be unilluminating from the formal-logical point of view, it is of no psychological value.

4.4.1. Contradiction and the Structure of Mind

    There is an alternate approach: to define relevance not by a partition into classes but rather in terms of the theory of structure. It is hypothesized that a mind does not tend to form the disjunction A or B unless the size

    %[St(A union v)-St(v)] - [St(B union w)-St(w)]%

is small for some (v,w), i.e. unless A and B are in some way closely related. In terms of the structurally associative memory model, an entity A will generally be stored near those entities to which it is closely related, and it will tend to interact mainly with these entities.
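As a rough illustration, St(.) can be modelled as a set of "patterns" (here just strings), and the norm %...% approximated by the size of a symmetric difference; every name in this sketch is illustrative, not the book's formalism:

```python
# Toy structural relevance: A and B are closely related when, relative to
# contexts v and w, they add nearly the same incremental structure.
def relevance_distance(st_A_v, st_v, st_B_w, st_w):
    d_A = st_A_v - st_v          # what A adds over the context v
    d_B = st_B_w - st_w          # what B adds over the context w
    return len(d_A ^ d_B)        # small value => A and B closely related

# Two propositions adding the same pattern over the same context: distance 0
assert relevance_distance({"p1", "p2"}, {"p1"}, {"p1", "p2"}, {"p1"}) == 0
# Unrelated additions: nonzero distance, so "A or B" would not tend to form
assert relevance_distance({"p1", "x"}, {"p1"}, {"p1", "y"}, {"p1"}) == 2
```

Unlike the partitioning approach, this distance is graded rather than all-or-nothing, so borderline cases receive intermediate values instead of being forced into a class.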

    As to the possibility that, by chance, two completely unrelated entities will be combined in some formula, say A or B, it is admitted that this could conceivably pose a danger to thought processes. But the overall structure of mind dictates that a part of the mind which succumbed to self-contradiction and the resulting inefficiency would soon be ignored and dismantled.

    According to the model of mind outlined above, each mental process supervises a number -- say a dozen -- of others. Suppose these dozen are reasoning deductively, and one of them falls prey to an internal self-contradiction, and begins giving out random statements. Then how efficient will that self-contradicting process be? It will be the least efficient of all, and it will shortly be eliminated and replaced. Mind does not work by absolute guarantees, but rather by probabilities, safeguards, redundancy and natural selection.
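This selection dynamic can be caricatured in a few lines of Python. The simulation below is my own construction, not the author's formalism: a dozen subordinate processes answer queries, one has collapsed into self-contradiction (from "A and not-A" anything follows, so its outputs are effectively random), and the supervisor scores each by self-consistency and replaces the worst.

```python
import random

random.seed(0)

def sound_process(query):
    # A sound process gives the same answer to the same query every time.
    return hash(query) % 2

def contradictory_process(query):
    # A self-contradicting process can derive anything, so it answers at random.
    return random.randint(0, 1)

# A supervisor oversees a dozen processes, one of which has gone bad.
processes = [sound_process] * 11 + [contradictory_process]

def efficiency(proc, trials=200):
    """Score a process by how often it agrees with itself on repeated queries."""
    agree = 0
    for i in range(trials):
        q = f"query-{i % 20}"          # queries repeat, so consistency is testable
        agree += proc(q) == proc(q)
    return agree / trials

scores = [efficiency(p) for p in processes]
worst = scores.index(min(scores))
processes[worst] = sound_process       # the least efficient process is replaced
```

No absolute guarantee is invoked anywhere: the contradictory process is detected only statistically, by its unreliability relative to its peers, which is precisely the "probabilities, safeguards, redundancy and natural selection" picture.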

4.4.2. Contradiction and Implication

    We have given one way of explaining why contradiction sensitivity need not be a problem for actual minds. But, as an afterthought, it is worth briefly noting that one may also approach the problem from the point of view of relevant implication. The step from "A and not-A" to B involves the step "not-A --> A or B". What does our definition of significant implication say about this? A moment's reflection reveals that, as noted above, clause (3) kicks in here: A is totally dispensable to this proof of B; it could just as well be replaced by C, D, E or any other proposition. The type of implication involved in contradiction sensitivity is not significant to any very high degree.


    Finally, what of Hempel's confirmation paradox? Why, although "all ravens are black" is equivalent to "all non-black entities are non-ravens," is an observation of a blue chair a lousy piece of evidence for "all ravens are black"?

    My resolution is simple, and not conceptually original. Recall the "infon" notation introduced in Section 2. Just because s |-- i //x to degree d, it is not necessarily the case that s |-- j //x to degree d for every j equivalent to i under the rules of Boolean logic. This is, basically, all that needs to be said. Case closed, end of story. Boolean logic is a tool. Only in certain cases does the mind find it useful.

    That the Boolean equivalence of i and j does not imply the equality of d(s,i,x) and d(s,j,x) is apparent from the definition of degree given above. The degree to which (s,k,x) holds was defined in terms of the intensity with which the elements of k are patterns in s, where complexity is defined by s. Just because i and j are Booleanly equivalent, this does not imply that they will have equal algorithmic information content, equal structure, equal complexity with respect to some observer s. Setting things up in terms of pattern, one obtains a framework for studying reasoning in which Hempel's paradox does not exist.
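As a crude computational illustration of this last point (again my own sketch, not the author's formalism), one can approximate "algorithmic information content" by compressed length and observe that two Booleanly equivalent sentences need not come out equal:

```python
import zlib

def complexity(statement: str) -> int:
    # Crude proxy for algorithmic information content: compressed length.
    return len(zlib.compress(statement.encode()))

i = "all ravens are black"
j = "all non-black entities are non-ravens"

# Boolean logic treats i and j as interchangeable;
# a complexity measure need not, and here does not.
print(complexity(i), complexity(j))
```

Any genuine structural complexity in the sense of the text is relative to the observer s; compressed length is only the simplest observer-independent stand-in, but it already suffices to break the symmetry that Hempel's paradox depends on.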

4.5.1 A More Psychological View

    In case this seems too glib, let us explore the matter from a more psychological perspective. Assume that "All ravens are black" happens to hold with degree d, in my experience, from my perspective. Then to what degree does "All non-black entities are non-ravens" hold in my experience, from my perspective?

    "All ravens are black" is an aid in understanding the nature of the world. It is an aid in identifying ravens. It is a significant pattern in my world that those things which are typically referred to with the label "raven," are typically possessors of the color black. When storing in my memory a set of experiences with ravens, I do not have to store with each experience the fact that the raven in question was black -- I just have to store, once, the statement that all ravens are black, and then connect this in my memory to the various experiences with ravens.

    Now, what about "All non-black entities are non-ravens"? What good does it do me to recognize this? How does it simplify my store of memories? It does not, or hardly at all. When I call up a non-black entity from my memory, I will not need to be reminded that it is not a raven. Why would I have thought that it was a raven in the first place? "Raven-ness?" is not one of the questions which it is generally useful or interesting to ask about entities, whereas on the other hand "color?" is one of the questions which it is often interesting to ask about physical objects such as birds.
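The storage argument can be made concrete with a toy Python sketch. The counting scheme here is my own assumption, not the book's model: a memory that stores "all ravens are black" once can drop the color field from every raven experience, while the contrapositive covers no stored field at all, since the memory never records "is not a raven" in the first place.

```python
# Toy memory: each experience is a (kind, color) observation.
experiences = ([("raven", "black")] * 50
               + [("chair", "blue")] * 30
               + [("rose", "red")] * 20)

def storage_cost(experiences, rules):
    """Count the facts that must be stored, given rules of the form
    (kind, color) meaning 'every <kind> is <color>'. A rule is stored once
    and lets matching experiences omit their color field."""
    covered = {kind for kind, _ in rules}
    cost = len(rules)                  # each rule stored once
    for kind, color in experiences:
        cost += 1                      # the experience itself
        if kind not in covered:
            cost += 1                  # its color must be stored explicitly
    return cost

no_rule = storage_cost(experiences, [])
with_rule = storage_cost(experiences, [("raven", "black")])
# "All non-black entities are non-ravens" predicts a field ("is not a raven")
# that this memory never stores, so in this scheme it saves nothing at all.
```

The rule pays for itself exactly in proportion to how many stored experiences it covers, which is why the two Booleanly equivalent formulations have such different value as patterns.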

    So, the real question with Hempel's paradox is, what determines the degree assigned to a given proposition s |-- i //x. It is not purely the logical form of the proposition, but rather the degree to which the proposition is useful to x, i.e. the emergence between the proposition and the other entities which neighbor it in the memory of x. Degree is determined by psychological dynamics, rather than Boolean logic. Formally, one may say: the logic of memory organization is what determines the subjective complexity measure associated with x.

    It is not always necessary to worry about where the degrees associated with propositions come from. But when one is confronted with a paradox regarding degrees, then it is necessary to worry about it. The real moral of Hempel's paradox, as I see it, is that one should study confirmation in terms of the structure and dynamics of the mind doing the confirming. Studying confirmation otherwise, "in the abstract," borders on meaningless.

    In Hempel's paradox one is once again confronted with "what follows what." Boolean logic says that one's belief in "all ravens are black" should be increased following observation of a blue chair. But in fact, observing a blue chair does not, and should not, lead to an increase in one's belief in "all ravens are black." Hempel's paradox is a sort of quantitative version of the paradox of implication -- instead of logic saying that B follows from A when it doesn't, one has logic saying that an increase in belief in B follows from an increase in belief in A when it doesn't.


    At about the same time that Frege, Peano and the rest were laying the foundations of modern mathematical logic, Friedrich Nietzsche was creating his own brilliantly idiosyncratic view of the world. This world-view was obscure during Nietzsche's lifetime but, as he predicted, it turned out to be enormously influential throughout the twentieth century.

    While the developments of the preceding sections lie squarely within the tradition begun by Frege and Peano, they also fit nicely into the context of Nietzsche's thought. In this section I will take a brief detour from our formal considerations, to explore this observation. In Chapter Ten -- after dealing with belief and language -- I will return to Nietzsche's thought, to help us understand the relation between logic, language, consciousness, reality and belief.

4.6.1. The Will to Power

    Nietzsche declared consciousness irrelevant and free will illusory. He proposed that hidden structures and processes control virtually everything we feel and do. Although this is a commonplace observation now, at the time it was a radical hypothesis. Nietzsche made the first sustained effort to determine the nature of what we now call "the unconscious mind." The unconscious, he suggested, is made up of nothing more or less than "morphology and the will to power." The study of human feelings and behavior is, in Nietzsche's view, the study of the various forms of the will to power.

    From the start, Nietzsche was systematically antisystematic; he would have ridiculed anyone who suggested making a chart of all the possible forms of the will to power. Instead, he concentrated on applying his idea to a variety of phenomena. In Human, All Too Human he analyzed hundreds of different human activities in terms of greed, lust, envy and other simple manifestations of the will to power. Substantial parts of The Genealogy of Morals, Beyond Good and Evil, and The Twilight of the Idols were devoted to studying ascetics, philosophers, and other personality types in a similar way. Two entire books -- The Case of Wagner and Nietzsche contra Wagner -- were devoted to the personality, music and philosophy of Richard Wagner. The Antichrist attempted a psychoanalysis of Jesus. And in Ecce Homo, he took on perhaps his most difficult subject: himself.

    Nietzsche was anything but objective. In fact his writings often appear delusional. His most famous book, Thus Spake Zarathustra, is written in a bizarrely grandiose mock-Biblical style. And Ecce Homo contains chapter titles such as "Why I Am So Wise", "Why I Am So Clever", and "Why I Am a Destiny", as well as a lengthy description of his diet. But Nietzsche did not mind appearing crazy. He did not believe in an objective logic, and he repeatedly stressed that what he wrote down were only his personal truths. He encouraged his readers to discover their own truths.

    He did not, however, believe that everyone's personal truth was equally valuable. According to Nietzsche, only a person with the strength to contradict himself continually and ruthlessly can ever arrive at significant insights. A person lacking this strength can only repeat the illusions that make him feel powerful, the illusions that enhance the power of the society which formed him. A person possessing this strength possesses power over himself, and can therefore grope beyond illusion and make a personal truth which is genuinely his own.

4.6.2. Nietzsche on Logic

    Logic, according to Nietzsche, is simply one particularly fancy manifestation of the will to power. At the core of mathematics and logic is the "will to make things equal" -- the collection of various phenomena into classes, and the assumption that all the phenomena in each class are essentially the same. Nietzsche saw this as a lie. It is a necessary lie, because without it generalization and therefore intelligence is impossible. As Nietzsche put it in his notebooks [1968a, p. 277],

     [T]he will to equality is the will to power... the consequence of a will that as much as possible shall be equal.

    Logic is bound to the condition: assume there are identical cases. In fact, to make possible logical thinking and inferences, this condition must first be treated fictitiously as fulfilled....

    The inventive force that invented categories labored in the service of our needs, namely of our need for security, for quick understanding on the basis of signs and sounds, for means of abbreviation....

    So logic is a lie, but a necessary one. It is also a lie which tends to make itself subjectively true: when an intelligence repeatedly assumes that a group of phenomena are the same for purposes of calculation, it eventually comes to believe the phenomena really are identical. To quote Nietzsche's notebooks again (1968a, p. 275):

    It cannot be doubted that all sense-perceptions are permeated with value judgements.... First images..... Then words, applied to images. Finally concepts, possible only when there are words -- the collecting together of many images in something nonvisible but audible (word). The tiny amount of emotion to which the "word" gives rise, as we contemplate similar images for which one word exists -- this weak emotion is the common element, the basis of the concept. That weak sensations are regarded as alike, sensed as being the same, is the fundamental fact. Thus confusion of two sensations that are close neighbors, as we take note of these sensations.... Believing is the primal beginning even in every sense impression....

    The valuation "I believe that this and that is so" is the essence of truth. In valuations are expressed conditions of preservation and growth. All our organs of knowledge and our senses are developed only with regard to conditions of preservations and growth. Trust in reason and its categories, and dialectic, therefore the valuation of logic, proves only their usefulness for life, proved by experience -- not that something is true.

    That a great deal of belief must be present; that judgements may be ventured; that doubt concerning all essential values is lacking -- that is the precondition of every living thing and its life. Therefore, what is needed is that something must be held to be true -- not that something is true.

    "The real and the apparent world" -- I have traced this antithesis back to value relations. We have projected the conditions of our preservation as predicates of being in general. Because we have to be stable in our beliefs if we are to prosper, we have made the "real" world a world not of change and becoming, but one of being.

    This is what Nietzsche meant when he wrote "there are no facts, only interpretations." A fact is an interpretation which someone has used so often that they have come to depend upon it emotionally and cannot bear to conceive that it might not reflect a "true" reality. As an example of this, he cited the Aristotelian law of contradiction, which states that "A and not-A" is always false, no matter what A is:

    We are unable to affirm and to deny one and the same thing: this is a subjective empirical law, not the expression of any 'necessity' but only of an inability.

    If, according to Aristotle, the law of contradiction is the most certain of all principles, if it is the ultimate and most basic, upon which every demonstrative proof rests, if the principle of every axiom lies in it; then one should consider all the more rigorously what presuppositions already lie at the bottom of it. Either it asserts something about actuality, about being, as if one already knew this from another source; that is, as if opposite attributes could not be ascribed to it. Or the proposition means: opposite attributes should not be ascribed to it. In that case, logic would be an imperative, not to know the true, but to posit and arrange a world that shall be called true by us.

    Note how different this is from Mill's shallow psychologism. In the Introduction I quoted Mill's "derivation" of the Law of Excluded Middle (which is equivalent to the law of contradiction, by an application of deMorgan's identities). Mill sought to justify this and other rules of logic by appeal to psychological principles. In Mill's view, the truth of "A or not-A" follows from the fact that each idea has a "negative idea," and whenever an idea is not present, its negative is. This is a very weak argument. One could make a stronger psychological argument for the falsity of "A and not-A" -- namely, one could argue that the mind cannot simultaneously entertain two contradictory ideas. But Nietzsche's point is that even this more plausible argument is false. As we all know from personal experience, the human mind can entertain two contradictory ideas at once. We may try to avoid this state of mind, but it has a habit of coming up over and over again: "I love her/ I don't love her", "I want to study for this test/ I want to listen to the radio instead". The rule of non-contradiction is not, as Mill would have it, correct because it reflects the laws of mental process -- it is, rather, something cleverly conceived by human minds, in order to provide for more effective functioning in certain circumstances.

    One rather simplistic and stilted way of phrasing Nietzsche's view of the world is as follows: intelligence is impossible without a priori assumptions and rough approximation algorithms, so each intelligent system (each culture, each species) settles on those assumptions and approximations that appear to serve its goals best, and accepts them as "true" for the sake of getting on with life. Logic is simply one of these approximations, based on the false assumption of equality of different entities, and many auxiliary assumptions as well.

    This is not all that different from Saint Augustine's maxim "I believe, so that I may understand." Augustine -- like Leibniz, Nietzsche and the existentialists after him and like the Buddhists and Sophists before him -- realized that thought cannot proceed without assuming some dogmatic presupposition as a foundation. But the difference in attitude between Augustine and Nietzsche is striking. Augustine wants you to believe in exactly what he does, so that you will understand things the same way he does. Nietzsche, on the other hand, wants you to believe and not believe at the same time; he wants you to assume certain approximations, to commit yourself to them, while at the same time continually realizing their tentative nature.

    So, what does all this have to do with the mathematical ideas of the preceding sections? Nietzsche saw a universal form underlying the various possible forms of logic -- the will to power. I do not disagree with this diagnosis, but I feel that it is too abstract. The structural logic described above is Nietzschean in spirit, but it is more detailed than anything Nietzsche ever said about logic: it makes explicit the dependence of logical reasoning processes on the biases, experiences and abilities of the mind that is doing the reasoning. It tries to capture this dependence in a precise, mathematical way. The "a priori assumptions and rough approximation algorithms" come into play in the process of pattern recognition, of complexity evaluation.

    Logic is not a corollary of other psychological functions; it is a special psychological function of relatively recent invention, one with its own strengths, weaknesses and peculiarities. But it has neither meaning nor utility outside of the context of the mind which maintains it and which it helps to maintain. This was Nietzsche's view of logic, and it fits in rather well with the more formal explorations given above.