**Stephen Wolfram’s A New Kind of Science**

***

**A Complexity Scientist’s Reaction**

Who is this guy who claims to have created a new kind of science?

Not a fool, for sure. A bit of a child prodigy, Stephen Wolfram graduated from Caltech in his teenage years and began his career as a theoretical physicist. He then went on to revolutionize the field of cellular automata, transforming it from an obscure corner of mathematics into a leading subfield of the emerging discipline of complexity science. Eventually he entered the business world, founding Wolfram Research and making his fortune from Mathematica, the software that made symbolic computer algebra practical and widespread for the first time.

But unlike many scientists who turn to business, Wolfram did not give up his scientific goals. He ran his company by day and worked on his Theory of Everything by night. But not a physicist’s Theory of Everything – a complexity scientist’s theory of everything. And not just a theory of everything – a new approach to creating theories of everything, and of particular things as well.

And now, in early 2002, we have the result of Wolfram’s 15+ years of night-time labor, a 1000-plus-page tome modestly entitled *A New Kind of Science* – a book that, as the author repeatedly informs us, he believes is one of the most important in the history of science. In past years, describing the work leading up to the book, Stephen Wolfram has compared himself to no less than Isaac Newton.

Is this stuff really *that* important? Well… maybe. Frankly, I doubt it. But
it’s a mighty interesting book nonetheless, with a goodly number of radical and
fascinating ideas.

The book’s biggest flaw, as a work of scientific literature, is precisely the astoundingly high esteem in which the author holds his own thinking. There is an irritating density of passages in which the author takes personal credit for ideas that are “common knowledge” among experts in the relevant fields. For instance, he says

*“The discoveries in this book suggest a new and quite different mechanism that I believe is in fact responsible for most of the examples of great complexity that we see in biology.”* (p. 14)

Well, actually, the “new and quite different” biological mechanism he “discovers” in the book is basically *self-organizing epigenesis* – underemphasized by the mainstream evolutionary biology community, but hardly new (see refs in Goertzel, 1993). And he says

*“Before the discoveries of this book, one might have thought that to create anything with a significant level of apparent complexity would necessarily require a procedure which itself had significant complexity. But what we have discovered in this book is that in fact there are remarkably simple programs that produce behavior of great complexity.”* (p. 559)

But of course, the discovery that simple programs can produce complex behaviors isn’t really new either – it’s pretty much the *raison d’etre* of the interdisciplinary pursuit of complexity science, which has been growing steadily for the last couple decades (see refs in Goertzel, 1994). These are not isolated examples; the book is full of comments like these, which emanate an attitude of “everything I touch, I make my own.”

There’s really no excuse for all the grandstanding – it’s too bad Wolfram didn’t let someone else red-pencil his manuscript with a critical eye. But in the end, the book is written clearly enough that the author’s egocentric asides don’t distract too much from the comprehensibility of the contents. The ideas speak for themselves, mostly.

So is it a new kind of science? Well, ironically, what Wolfram presents in his book is *indeed* a new kind of science – **but** (you knew a “but” was coming!) in spite of his frequent statements to the contrary, it is not one entirely, or even primarily, of his own invention. The “new kind of science” he presents is what many of us call “complexity science” or “complex systems science.” He has made many significant advances in this domain, but not nearly enough to justify calling the whole area his own invention!

But one must give credit where credit is due – I have to admit that, among the many speculative complex-systems-related ideas he presents in his book, there are several that would be quite revolutionary in their implications if more careful and extensive investigation proved them true. Validation of a majority of the original hypotheses in this book would certainly cement Wolfram’s reputation as one of the top all-time contributors to complex systems science.

The book consists of 700+ pages of main text, and 300+ pages of notes. I strongly urge you to read the notes as well as the main text. They contain a lot of interesting random observations, but they also do something that the main text doesn’t do enough of: discuss the relationship of Wolfram’s work with that of other researchers. And not in the manner of dry academic references, but in a more meaningful sense. In the notes, he often explains how his ideas have come out of the general scientific zeitgeist, and how they compare to the ideas of others – and oddly, he does this even in cases where, in the main text, he makes it sound as if the ideas he’s presenting are totally unrelated to anything ever said before.

Now, moving on to the most important thing – the contents of the book – the first thing that must be said is that the book is a huge and diverse one, and a brief review can’t possibly cover all the topics mentioned. (If you want to consider this a lame excuse for the incompleteness of this review, go right ahead!)

Amidst all the diversity, however, Wolfram has three main themes:

- Extremely complex behaviors of all sorts can come out of simple rules
- Just about any kind of “simple rule framework”, if it’s not totally trivial, is capable of giving rise to the full spectrum of complex behaviors (this is what he calls the Principle of Computational Universality)
- “CA-ish” systems (my term for dynamical systems vaguely resembling cellular automata) are a generally useful model for complex systems of all sorts

Theme #1 is certainly not original to Wolfram, although he often speaks like it is. This is the basic insight of complexity science, as written about in thousands of technical papers and dozens of popular books over the last 15 years or so.

Theme #2 is more interesting, and though it’s not totally without precedent, Wolfram articulates it far more clearly than anyone else has done. He doesn’t come close to proving that his “Principle of Computational Universality” is true (and he explicitly acknowledges this), but he makes an interesting argument. I’ll explore this aspect of his thinking in a fair amount of detail, later on in this review.

Theme #3, finally, is what most vividly distinguishes Wolfram’s work from that of others in the complex systems field. Wolfram didn’t invent cellular automata, but he did far more with them than any other single researcher has, and he definitely was the one who “put CA’s on the map.” In this book he goes far beyond traditional CA models, but nearly all the models he considers are what I would call “CA-ish” – they are minor modifications of the basic CA framework. The main reason why he sticks with CA-ish rules, I believe, is that his modus operandi is the *visual* analysis of complex dynamics. Visual analysis of systems like genetic algorithms, neural networks, or self-rewriting finite automata is difficult to do. His predilection for systems whose evolution through time can be visually monitored restricts him to CA-ish models. If one accepts the Computational Universality Principle in the way Wolfram means it, one is not giving anything up by restricting attention to such systems, because all complex systems models are basically the same anyway!

To illustrate these themes, Wolfram reviews a host of different application areas, in each one creating some simple CA-ish models that emulate important or interesting phenomena. It’s too much to write about in a single review, even an overly long-winded one -- all I’ll do here is to mention a few points that struck me as particularly interesting.

***

Probably the clearest breakthrough presented in the book is a purely technical mathematical one: the proof that CA rule 110, a particular cellular automaton rule, is a “universal computer” – able, like a Turing machine or a desktop PC, to simulate any finitely describable behavior. This leads him to a mathematical construction of the simplest known universal computer, a very small Turing machine, smaller than the one constructed by Minsky decades ago, which used to hold the record. The proof of universality of CA rule 110, described at a moderate level of detail, is fascinating, involving a mixture of intuitive visual arguments and complex computations. Of course, in the big picture that he’s sketching, it doesn’t really matter whether CA rule 110 is universal or not; what really matters is that there are a lot of simple universal rule-sets. But CA rule 110 is an excellent example, and for connoisseurs of proofs, this one is awfully cool.
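For readers who want to play with this themselves, an elementary CA like rule 110 takes only a few lines of code to simulate. The sketch below is my own illustration (not Wolfram’s construction, and certainly not his universality proof): it evolves rule 110 from a single “on” cell and prints the resulting pattern as text, where the characteristic interacting-structure behavior can be seen.

```python
# Illustrative sketch: evolving elementary CA rule 110 and printing its
# space-time pattern as text. (My own toy code, not from the book.)

def step(cells, rule=110):
    """Apply an elementary CA rule once, with periodic boundaries."""
    table = [(rule >> i) & 1 for i in range(8)]  # rule number -> lookup table
    n = len(cells)
    return [
        table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

def run(width=64, steps=32, rule=110):
    """Run from a single 'on' cell near the right edge; return all rows."""
    cells = [0] * width
    cells[-2] = 1  # rule 110's structures grow leftward from here
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Swapping in other rule numbers (30, 90, 250, …) shows Wolfram’s four behavior classes at a glance.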

***

In contrast to the striking originality of his approach to CA rule 110, his treatment of biological form reminded me very much of a book called *Evolution without Selection*, by A. Lima de Faria (which Wolfram does not reference, presumably because it’s a fairly obscure book and, with so many disciplines to play in, he only had a limited amount of time for biological background reading). Lima de Faria’s point, in his excellent book, is that most forms observed in organisms can be explained better in terms of self-organizing processes than in terms of adaptation to environmental conditions. He gives one example after another of nearly identical patterns that have appeared independently in evolutionarily distinct species. Australian fauna provide outstanding examples of this. Tasmanian tigers looked a fair bit like African and Asian mid-sized predator mammals, but they were marsupials; their genetic material had relatively little in common with their near-lookalikes. And why do pandas look so much like bears, yet share more genetic material with raccoons? Why do brain corals twist around like brains, and why do the fractal branching patterns of our blood vessels look like the sprawling roots of plants? Not because the details of an animal’s body are selected by evolution; rather, because the details of epigenesis are governed by complex dynamical systems that tend to have similar attractors.

Wolfram gives a more mathematical spin to the same idea. Most strikingly, he takes on the frequent occurrence of the “golden ratio” ((√5 − 1)/2 ≈ 0.618) in nature. Because the golden ratio occurs in mathematics as the solution to so many optimization problems, evolutionary biologists have assumed that its occurrence in the shapes of plants and animals is a result of evolution solving optimization problems, in the course of adapting organisms to their environments. But Wolfram shows that, in case after case, the golden ratio arises in simple self-organizing systems that model aspects of epigenetic development – without any selection at work at all.
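To see how little machinery this takes, consider the simplest additive growth rule, in which each new element is the sum of the two before it (a toy model of my own choosing, not one of Wolfram’s specific constructions): the ratio of successive sizes is driven to the golden ratio with no optimization or selection anywhere in sight.

```python
# Toy illustration: a Fibonacci-style additive growth rule pushes the ratio
# of successive terms to the golden ratio (sqrt(5) - 1) / 2 ~ 0.618, with no
# optimization or selection involved. (My own example, not from the book.)

GOLDEN = (5 ** 0.5 - 1) / 2  # ~ 0.6180339887

def growth_ratios(n):
    """Grow by the rule 'new segment = sum of last two'; track size ratios."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b          # purely additive growth step
        ratios.append(a / b)     # ratio of consecutive sizes
    return ratios

ratios = growth_ratios(30)
print(ratios[-1], GOLDEN)  # the ratio converges rapidly to the golden ratio
```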

I think that Wolfram and Lima de Faria have excellent points – but my own view is that they may be pushing the pendulum too far in the other direction. (Lima de Faria pushes further than Wolfram.) Traditional neo-Darwinist biology has stressed selection far too much, arguing that one particular trait after another must be optimized by evolution for some particular purpose. Yet one does not want to underemphasize selection either: it is a powerful force for adapting the various parameters involved in the complex self-organizing epigenetic processes that build organisms from genomes.

The chapter on fluid dynamics includes a lot of interesting pictures, showing how patterns intuitively resembling turbulence, ripples, and other fluidic phenomena can be obtained from very simple rules. I found this treatment of fluid dynamics refreshing, compared to the dizzying complexity of the Navier-Stokes equations, which are the standard approach. Yet, I suspect that experts in fluid dynamics will find his treatment of their specialty area somewhat frustrating, as well as perhaps inspiring. What he does there is to demonstrate that interesting phenomena looking sort of like real fluid dynamics can be achieved using simple CA-ish models. This is fine, but the next step is to try to get these same phenomena to emerge out of more realistic models, in the same sort of way that they come out of the CA-ish models. The simple CA-ish models he considers, it seems, are never going to enable us to explain particular fluid-dynamical phenomena in any detail. The question is then whether Wolfram’s work can inspire new realistic fluid-dynamic models that are significantly different from the fluid-dynamical models scientists are already using. After all, the state of the art in fluid dynamics is discrete grid simulations – which are fairly CA-ish in nature, and can be viewed as a more complicated version of what Wolfram is doing.

The chapter on “Processes of Analysis and Perception” was a big disappointment to me, perhaps because it came so close to my own area of research, Artificial General Intelligence. I found *no* really useful insights in this chapter. He shows how some fairly basic pattern recognition processes can be represented in terms of CA-ish systems, but this didn’t strike me as terribly interesting: these basic processes can be represented in many different ways. The visualizability of CA’s didn’t seem to add much here. Maybe this is just because Wolfram’s intuition in these areas is weaker than in other domains – or maybe it’s because the CA-ish modeling framework becomes less useful as one moves away from the physical world and into the domain of cognition.

Perhaps the most controversial part of the book will be his chapter on physics, in which he speculates on the possibility of a purely space-and-time-discrete foundation for fundamental physics. I have a lot of sympathy with his approach here, although this is definitely the most speculative part of the book (and it’s also a part of the book where the author refrains from grandiose claims to a remarkable degree, perhaps because physics is his original specialty). The graph-rewriting systems he describes here are the least CA-ish systems he considers anywhere in the book: here, more than anywhere else, he seems willing to morph his modeling approach to match the phenomenon under study. It’s not clear to me why he’s willing to do this in the domain of physics, and not, say, cognition and perception. But in any case, the deviation from CA-ish-ness seems successful, and he paints a convincing speculative picture of a future physics in which low-level graph-rewriting dynamics lead to the emergence of space, time, particles and forces at a higher level of granularity. I’m afraid his ideas are too far off from the mainstream of physics to be taken seriously by many contemporary researchers, but I hope they will be inspirational to the next generation.

A less controversial – though by no means uncontroversial – excursion into physics is his treatment of the Second Law of Thermodynamics. Unlike many thinkers, he views the Second Law as an approximation only. He very cleverly constructs “conservative” CA’s, which are fully reversible and hence are better qualitative models of the laws of physics than ordinary CA’s. He then shows how some of these conservative, reversible CA’s can violate the Second Law. But he does admit that such violations are relatively unlikely – which is consistent with the traditional, statistical interpretation of the Second Law. His work here hints at a rethinking of the foundations of statistical mechanics, and I imagine this is one aspect of the book that *will* be taken up by other thinkers in the coming years.

Finally, the aspect of the book that I personally found most fascinating was the discussion of *randomness in dynamical systems*. This is another place where he opens up a fascinating door and then doesn’t look very hard inside the new room he’s exposed. But at first sight, this particular room looks tremendously interesting. What Wolfram’s CA simulations hint at here is a new approach to pseudo-randomness in complex dynamical systems.

Having worked with “chaos theory” for years, I – like all other dynamical systems researchers -- have become relatively accustomed to the idea that apparent randomness in a dynamical system is generally tied to “sensitive dependence on initial conditions.” Chaotic systems look quasi-random because small changes blow up into huge ones over time. A butterfly’s wing-flap in Australia causes a hurricane in the North Atlantic to veer off course and hit New York instead of Newfoundland.

Wolfram’s insight is that this is not necessarily the case – that quasi-randomness can occur in simple dynamical systems that do not display any sensitive dependence upon initial conditions. This is a deep and important idea, which I will be working to incorporate into my own work. It’s worth noting, however, the manner in which I plan to incorporate his insight into my research, which may be indicative of how other researchers will react to Wolfram’s work. My reaction will certainly *not* be to give up working with the complex systems models that I use, and start working with easily visualizable CA-ish systems instead. Rather, my reaction will be to carefully study the models that I’m already working with, which are natural in the domain of AI, and see whether the phenomenon that Wolfram has observed really occurs there. It’s possible to measure the degree of sensitive dependence on initial conditions, and the degree of statistical randomness, in nearly any complex systems model – and if Wolfram is right, then these two measurements will not be all that strongly correlated. This will be a fascinating sort of experiment to run with my own complex systems models, and I am grateful to Stephen Wolfram for putting the idea in my head!
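For concreteness, here is one crude way such an experiment might look, using elementary CAs rather than the AI models I have in mind. The specific measures – damage spreading for sensitivity, bit entropy of the center column for randomness – are my own stand-ins, chosen for simplicity rather than taken from the book:

```python
# A crude sketch of the proposed experiment: for each CA rule, measure
# (a) how far a one-cell perturbation spreads (sensitive dependence) and
# (b) how "random" the center column looks (Shannon entropy of its bits).
# If Wolfram is right, (a) and (b) need not be strongly correlated.
import math

def step(cells, rule):
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(cells)
    return [table[(cells[(i-1) % n] << 2) | (cells[i] << 1) | cells[(i+1) % n]]
            for i in range(n)]

def run_center(rule, width=101, steps=200):
    """Center-column bit sequence, starting from a single 'on' cell."""
    cells = [0] * width
    cells[width // 2] = 1
    col = []
    for _ in range(steps):
        col.append(cells[width // 2])
        cells = step(cells, rule)
    return col

def entropy(bits):
    """Shannon entropy of a bit sequence's 0/1 balance (a crude randomness proxy)."""
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def sensitivity(rule, width=101, steps=100):
    """Fraction of cells that differ after flipping one initial cell."""
    a = [0] * width
    a[width // 2] = 1
    b = list(a)
    b[width // 2 + 1] ^= 1  # the one-cell perturbation
    for _ in range(steps):
        a, b = step(a, rule), step(b, rule)
    return sum(x != y for x, y in zip(a, b)) / width

for rule in (30, 90, 110, 250):
    print(rule, round(entropy(run_center(rule)), 3), round(sensitivity(rule), 3))
```

Running this over all 256 elementary rules, and scatter-plotting the two measures, would be a miniature version of the correlation test described above.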

Every chapter of the book contains some interesting insights; the ones I’ve mentioned above are just a sampler. But the punchline of this massive masterwork comes at the end. By far the most original and exciting idea in Wolfram’s book is the Principle of Computational Universality, given in the last chapter. If true, this principle gives his focus on CA-ish systems a whole different slant than one would normally place on it. From the standard complexity-science perspective, a book on CA-ish models is only exploring a small portion of “complex systems space.” But if you buy computational universality as Wolfram intends it, then all the different models in complex systems space are basically the same, so exploring the CA-ish subspace is all that needs to be done.

In thinking about this very subtle and fascinating issue, I have come to feel it’s important to distinguish two variants of the Computational Universality principle. Wolfram does *not* make this distinction clearly in his book, and I think this is unfortunate. In framing these two variants I will use the term “complex systems model classes”. By this I mean broad mathematical frameworks for modeling complex systems, for example: cellular automata, neural networks, automata networks, rewriting systems, and so on. I will also use the term “complex behaviors,” by which I mean, in Wolfram’s style, behaviors that are neither merely repetitive nor merely random.

My two variants are as follows:

**Strong computational universality**: All complex systems model classes that lead to nontrivial behaviors, lead to all possible complex behaviors, using reasonable space and time resources

**Weak computational universality**: All complex systems model classes that lead to nontrivial behaviors can in principle produce all possible complex behaviors – but under the restriction to reasonable space and time resources, each model class leads only to a certain subclass of the possible complex behaviors

Neither of these has been proved to be true, and in fact, giving a sensible rigorous mathematical formalization for either of them would be a serious undertaking (a fact which Wolfram acknowledges, and attributes to the newness of the type of science he is doing).

Theme #3 of Wolfram’s book – that CA-ish systems are adequate for complex systems modeling in general – hinges on the strong computational universality principle. If only the weak computational universality principle is true, then it may be the case that some complex systems behaviors, while possible to simulate on CA-ish systems, are pragmatically simulable only via fundamentally different sorts of complex systems models.

Of course, the nasty little “efficiency” issue that separates my two variants of Wolfram’s principle is also the biggest thorn in the side of the theory of universal computation (a standard part of theoretical computer science since Turing). Computation theory shows that any “universal computer” can simulate any other computer, by running an appropriate simulation program. It shows that some very simple computer programs are universal. But the catch is that, in practice, the simulation program may be unacceptably inefficient – the simulation of machine A, running on machine B, may run 100,000,000 times slower than machine A itself, or may use 10,000,000,000 times as much memory.

So, when Wolfram says that a given system – say, CA rule 110 – is “universal,” what this means is that it is *in principle capable* of modeling the universe, or the brain, or biological evolution, or whatever – *if* it is given a huge amount of memory and processing power (i.e. a huge initial condition and a huge number of iterations to run). It does not mean that CA rule 110 is in any way *pragmatically effective* for modeling all these systems.

If you have another modeling framework that you think is better than CA rule 110 for modeling, say, the brain or the universe -- then you are guaranteed that your whole modeling framework can be simulated using CA rule 110. But you’re not guaranteed that the CA rule 110 implementation of your modeling framework will fit into the observable universe, or run to the point of giving any meaningful result before the Big Crunch destroys us all.

But, even if the computational efficiency issues are resolved, and the Strong Computational Universality Principle is proved true, Wolfram’s Theme #3 about the universal utility of CA-ish systems in complex systems modeling may not hold up. It seems to me that Theme #3 of the book hinges on something even stronger, which I’ll call the

**CA-ish Universal Visualizability Principle**: Not only are CA-ish systems adequate for modeling all types of complex systems with reasonable space and time efficiency, but also, all interesting complex systems phenomena can be observed visually in CA-ish systems, by the human eye.

Because, even if it’s true that any complex system can be modeled using a CA-ish system in a computationally practical way – if the result is a CA-ish system where the most interesting stuff going on is not pragmatically visualizable, then who cares? Why bother accepting any performance penalty at all to use a CA-ish system instead of some other kind of model? Wolfram’s entire methodology seems to rely on identifying interesting complex behaviors by visually inspecting system trajectories. The big advantage of CA-ish systems over other sorts of complex systems models, so far as I can tell, is that they are easily visualizable – but so far, he has not visualized any really deep biology- or cognition- or fundamental-physics related patterns in his panoply of CA-ish systems.

Ray Kurzweil (2002), in his review of Wolfram’s book, makes a similar point. He notes that the “complex” pictures shown in the book are really quite repetitive in appearance, compared to the complex phenomena seen in the real world.

I don’t know how Wolfram would respond to this complaint,
but one response he could reasonably give would be: "Yes, but a
sufficiently large CA-ish picture *would* have patterns of that complexity
and variety in it.”

To this, I can think of two good answers, corresponding to the Strong Universality Principle and the CA-ish Universal Visualizability Principle:

**The efficiency complaint**: “So what? If it takes Class 4 CA’s with initial conditions of length 10^{50} and runtimes of 10^{100} generations to emulate the brain, what use is the emulation?”

**The nonvisualizability complaint**: “Yes, but will the meaningful patterns in these huge pictures be visually discernable? Suppose you used CA rule 110 to implement a human-level AI system – would its thoughts correspond to comprehensible visual patterns? Why do you think they would? And if the important patterns aren’t visually discernable, then what makes CA-ish systems better than other complex systems modeling frameworks?”

To summarize my lengthy complaint about his universality principle: It’s not enough to argue that in principle CA-ish systems can model everything. One has to explain why this CA-ish-system-focused modeling approach is useful in practice – as opposed to the approach of introducing, well, all the rest of the field of complexity science that’s been developing over the last couple decades, in parallel with Wolfram’s own interesting work.

For instance, we already know that neural-net type models are useful for modeling some biological and cognitive processes. Dozens of books and thousands of papers have been written studying the self-organizing dynamics of neural network models – complex behaviors ensuing from simple rules. What new things does Wolfram’s approach have to offer researchers interested in modeling brain and mind? Not the idea of getting complex behaviors from simple rules – this idea is already well-understood. A better modeling framework, one that will lead to simpler models and more interesting hypotheses? Maybe, but this is certainly not demonstrated in *A New Kind of Science*. The results he gives in the “Processes of Perception and Analysis” chapter don’t hold a candle to the achievements of the better neural net researchers (see e.g. Amit, 1989). The rules Amit works with are not quite as simple as Wolfram’s rules, but the results, in the domain of perception and analysis, are tremendously more significant.

Wolfram’s Principle of Computational Universality does contain a very deep insight, one going beyond standard universal computation theory, which is: **almost any dynamical system that doesn’t lead to random or transparently fixed or oscillatory behavior is likely to be a universal computer.** This is a fascinating statement, though I’m not yet 100% convinced it’s true. But even if it is true, the consequences may not be as dramatic as Wolfram suggests. It may be that each of these theoretically-universal dynamical systems is going to lead to different behaviors in practice.

OK – now I’m going to blatantly use this “book review” as an excuse to vent some of my own ideas about complexity science and how it should be turned into a “real science.” But don’t worry, this still has *something* to do with Wolfram. It just has less to do with what he *did* say in his book, and more to do with what I *wish* he’d said but he actually didn’t.

Wolfram, with his Computational Universality Principle, is taking a stab at laying the foundations for a “unified theory of complexity science.” It’s a fascinating and valiant effort. But – here we go! -- I’m not sure that his effort represents a proper understanding of what a “unified complexity science theory” should be.

Personally, I doubt that CA’s are going to emerge as the core modeling tool of the science of complex systems. Rather, I think we’ll continue to see a diversity of modeling tools. And I suspect that the science of complex systems – when it finally emerges as a robust science rather than just a philosophy together with a diverse collection of specialized tools and examples – will focus on the formalized study of *emergent system patterns* rather than focusing on one specific class of dynamical systems for generating such patterns.

I also doubt that Wolfram’s MO of visually identifying complexity via 2D patterns is going to take him to a real complexity-based physics theory, or a complexity-based theory of mind-brain, evolution or even fluid dynamics (to name just a few of the disciplines he touches in his book). For that I think some kind of math/science that focuses on emergent patterns rather than generating iterations will be necessary.

And this brings us to a point that I consider highly critical. It strikes me that the big thing missing in Wolfram’s “grand unified theory of complex systems” is a systematic and mathematical way of analyzing, interrelating and synthesizing *emergent patterns*. He plays with iterations and identifies emergent patterns visually, then proves some things about them in special cases, or draws analogies between what he sees and various real-world systems. But until we have some real science and math about the emergent patterns in these complex systems, then even if there is a small collection of CA-ish rules giving rise to the universe as we know it, we may well never be able to find this collection. After all, even for “relatively small” CA or CA-ish rules, the search space is very, very large.

The big question is: How do we go backwards from the desired emergent patterns to the underlying rules? Wolfram gives us no guidance in this direction. He uses his own intuition – this is what he’s been doing for 20+ years. But one can only get so far by clever intuition and visual inspection. Until we have some science regarding this process, we don’t really have a complex systems science.

My guess is that, if we're going to have a real science of complexity, we need either

a) a solid math theory telling us what kinds of rules and initial conditions will give rise to what emergent patterns (and we're very far from having such a thing, at this point), OR

b) an AI system that can do far better than humans or existing heuristic search techniques, at searching the space of possible dynamic rules and initial conditions in a systematic and intelligent way

I had hoped that, somewhere in his book, Wolfram would provide a), but he didn’t. Nor has anyone else, so far. Now, achieving a) or b), in my view, would really be a Newton-level achievement. My own work has focused on b) (see Goertzel & Pennachin, 2002; or www.realai.net).

Wolfram does deal with this point, in a limited sense. He gives examples of specific constraints on dynamical systems, and then shows that through exhaustive search one can often find CA-ish systems that obey these constraints. From the difficulty of doing this kind of search, he concludes that finding systems that satisfy constraints is not a very good idea; that instead one should start out with simple rules and see what they lead to.
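In the simplest possible setting, such an exhaustive constraint search might look like the sketch below. The fuzzy constraint here – a center column that is neither frozen nor maximally noisy – is my own illustrative choice, not one of the specific constraints Wolfram searches for:

```python
# A brute-force sketch of constraint search in its simplest setting: scan
# all 256 elementary CA rules for ones whose center column satisfies a
# fuzzy emergent constraint. (The constraint is my own illustrative
# stand-in, not one of Wolfram's.)
import math

def step(cells, rule):
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(cells)
    return [table[(cells[(i-1) % n] << 2) | (cells[i] << 1) | cells[(i+1) % n]]
            for i in range(n)]

def center_column(rule, width=81, steps=200):
    """Center-column bit sequence from a single 'on' cell."""
    cells = [0] * width
    cells[width // 2] = 1
    col = []
    for _ in range(steps):
        col.append(cells[width // 2])
        cells = step(cells, rule)
    return col

def entropy(bits):
    """0/1-balance entropy: 0 for frozen columns, 1 for perfectly balanced ones."""
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Fuzzy constraint: keep rules whose center column is "interestingly
# intermediate" -- neither frozen nor perfectly coin-flip balanced.
hits = [r for r in range(256) if 0.4 < entropy(center_column(r)) < 0.95]
print(len(hits), hits[:10])
```

Even this toy version hints at why the inverse problem is hard: 256 rules are trivially enumerable, but the constraint is a blunt proxy for the emergent pattern one actually wants, and for richer rule spaces exhaustive scanning is hopeless.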

He has a good point: searching for systems that satisfy specific constraints is both “unnatural” and very hard. But as he himself notes, biological evolution doesn’t work by forcing organisms to satisfy highly specific constraints in order to survive. Rather, it demands that they give rise to emergent patterns displaying certain broad qualities. He does not report any experiences trying to automatically find simple rule-sets giving rise to important *fuzzily-defined emergent patterns*. Yet, is this not really the crux of complex systems science? Seeing a certain real-world phenomenon, or desiring a certain engineered phenomenon, and trying to understand what simple rules could give rise to it – first in general, then in progressively more and more detail? If this problem had been solved for CA-ish systems, then everyone would drop their current modeling approaches and start using CA-ish systems straight away! This “inverse problem”, with fuzzy emergent patterns rather than precise formulas as constraints, is the **hard problem** of complex systems science, and Wolfram has not touched it, not even as much as some others have done, such as Jim Crutchfield with his “computational mechanics” work (see references for a list of publications).

(Okay, time to go back to pretending this is an actual book review….)

Is the book worth reading? Definitely. Every scientist or scientifically-savvy layperson should buy it, and every reader will be amazed at how easy it is to get through the tremendous mass of pages and ideas.

What can you expect to get out of the book? At the very least, a broad education in a lot of interesting complexity-science lore. Whether you get more than this from it really depends on your particular areas of interest, as the book surveys many different disciplinary areas, and the quality and depth of coverage is highly uneven.

As my detailed comments above should have made clear (unless you’re brain-dead, in which case I suggest you don’t bother reading Wolfram’s book anyway), Wolfram’s “new kind of science” is not that new after all. His “new kind of science” is none other than the science of complex systems, which has been growing for some time, fueled largely by advancing computer hardware, and by software innovations like Wolfram’s *Mathematica*. It’s very exciting stuff. Wolfram has given us a funky new angle on some familiar complexity-science ideas, and introduced some really interesting technical and conceptual insights (a few of which I’ve reviewed above), but there is no new revolution anywhere in these 1000+ pages. In short, Wolfram’s work forms an important part of this emerging new-kind-of-science – but not quite as important a part of it as he seems to think it does. Read it and enjoy it anyway!

Amit, Daniel (1989). *Modeling Brain Function: The World of Attractor Neural Networks.* Cambridge University Press.

Crutchfield, Jim. List of publications on computational mechanics: http://www.santafe.edu/projects/CompMech/papers/CompMechCommun.html

Goertzel, Ben (1993). *The Evolving Mind.* Gordon and Breach.

Goertzel, Ben (1994). *Chaotic Logic.* Plenum Press.

Goertzel, Ben and Cassio Pennachin (2002). Design for a Digital Mind: The Novamente Artificial General Intelligence System. In preparation.

Kurzweil, Ray (2002). Reflections on Stephen Wolfram’s “A New Kind of Science,” http://www.kurzweilai.net/meme/frame.html?main=/articles/art0464.html?

Wolfram, Stephen (2002). *A New Kind of Science.* Wolfram Media.