Computer Science Dept, College of Staten Island
Electronic commerce (e-commerce) is a rapidly growing segment of the economy, and is expected to grow still more rapidly over the next few years (Varian, 1996). Current Internet commercial transactions are estimated in the hundreds of millions of dollars, and are projected to reach billions within the decade. The growth of net-transacted revenues will be energized by Visa and Mastercard's release of secure software standards for their cardholders' Internet transactions, and by the acceptance of standards for micropricing. All in all, it is reasonable to project that, in a decade's time, the vast majority of economic transactions will involve a significant electronic component. However, traditional economic concepts give us fairly poor guidance in understanding this emerging economic domain. A new conceptual framework is required.
We believe that an appropriate conceptual framework is to be found in the emerging, interdisciplinary science of complex systems (Goertzel, 1997; Weisbuch, 1991), and with this in mind we sketch here a model of Internet commerce based on dynamical systems theory and the theory of autonomous agents. Among our main conclusions, in practical terms, is that in the emerging information-based economy, a major role will be played by
artificially intelligent economic agents (AI)
tools for guiding intuitive human exploration of market dynamics (IA, intelligence augmentation)
As we move from an economy focussed on exchange of goods to an economy focussed on communication of information, the ability to elicit information from complex, chaotic market dynamics via statistical pattern recognition will assume premium value.
We begin with a discussion of the software market and the particular economic efficiencies provided by globally networked communication. We integrate this information into an abstract model of information space, in which economic transactions are modelled as agent interactions taking place on a graph of store/processor nodes. Then we turn to the dynamics of economies in general, summarizing an emerging body of evidence which suggests that economic dynamics is self-organizing and chaotic, with complex, overlying emergent patterns.
Putting these ideas together, we note that the increased communication provided by the Internet is highly likely to make the electronic economy more pronouncedly complex than the traditional economy. This suggests that, in the context of e-commerce, the ability to manipulate information regarding markets and their dynamics will be even more important than it is today. Artificial intelligence and powerful data-mining and data visualization tools will allow human and computational agents to successfully navigate this new, newly complex and dynamic economy.
The emerging Internet economy is a complex beast, with many different facets. All the features of traditional economies are present, but with a different slant, caused by the increased role of communication and the increased ease of information collection.
First it should be noted that we are concerned here with economics within the Internet, rather than with the economics of the Internet itself. That is, we consider the Internet as a free-information substrate within which free and commercial interactions take place. We will not consider the costs of information transmission across networks, nor the costs of individual users' Internet access. In practice, the entry cost for Internet users consists of recurring access fees together with fixed hardware and software investments; but this can be safely ignored if one's concern is economic transactions occurring within the Net itself. If the Internet as a whole were to move to a cost model in which an individual paid for a message based on the cost of physically transmitting that message, then our model of Net commerce would need to be revised somewhat; but this outcome is extremely unlikely, as it goes against the packet-switching methodology that drives the Internet on a hardware/software level.
Internet commerce takes a variety of forms. For instance, the commercial Internet provides consumer purchasing access to physical products through catalogs and catalog-derivative inventory and fulfillment systems. It also provides access to information processing and data warehousing (or sometimes ``data mart'') products. These economies are sometimes contained within the Internet, and sometimes extend beyond the electronic realm and into the physical world. In short, one can use the Net to buy products of all sorts, to buy data, and to buy computational processes that act on one's data.
Some Internet services which are currently free are expected to become commercial in the near future. For instance, Web search engines, electronic magazines and Web classified ad sites are now free services, but with the advent of ``micropricing,'' it will become practical to charge users prices as low as a thousandth of a cent for individual transactions such as accessing Websites or performing searches. The software for micropricing already exists (Glassmann et al, 1995); it has merely to become widely accepted. The transformation of currently free services into micropriced services can be expected to lead to a substantial increase in the quality of services offered; and it may well also result in the transformation of some currently commercial services, such as Net access itself, into free services.
The Internet offers an unparalleled mechanism for market efficiency. No longer will a consumer have to ``shop around'' in tedious or time-consuming fashion; instead there will be services that list the prices available for comparable items from different vendors. An initial experiment in this regard is PriceWeb (Zabeh, 1995), which is a Website listing prices offered for computer peripherals by various hardware vendors. If PriceWeb becomes commonly used it will have a tremendous impact on the way peripherals are purchased. The effect will be to push vendors into charging a single, uniform price, with deviations from the price only being possible if the vendor offers extra services of significant and obvious value.
Not all commodities are as standardized as computer peripherals, but the basic idea of PriceWeb can obviously be extended throughout the Internet economy in general. One will have artificial and human agents offering goods at various prices; other agents monitoring the prices being offered by various agents and posting this information to databases available for free or at a price; and other agents using the posted information to purchase from or negotiate with the selling agents.
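The price-monitoring scenario just described can be sketched in a few lines. The following is a toy illustration, not an implementation of PriceWeb itself; the class and function names (Vendor, post_offers, choose_offer) and the flat ``service premium'' are our own illustrative assumptions about how a buying agent might discount price against extra services of obvious value.

```python
class Vendor:
    def __init__(self, name, price, service_score):
        self.name = name
        self.price = price                  # asking price for the commodity
        self.service_score = service_score  # extra service value offered (0..1)

def post_offers(vendors):
    """The monitoring agent: collect every vendor's current offer into a
    database available to other agents."""
    return [(v.name, v.price, v.service_score) for v in vendors]

def choose_offer(board, service_premium=5.0):
    """A buying agent: rank offers by price net of the value of service.
    A vendor can sustain a higher price only by offering extra services
    of significant and obvious value, modelled here as a flat premium."""
    return min(board, key=lambda offer: offer[1] - service_premium * offer[2])

vendors = [
    Vendor("A", 100.0, 0.0),
    Vendor("B", 103.0, 0.9),   # dearer, but with strong extra service
    Vendor("C", 101.0, 0.1),
]
best = choose_offer(post_offers(vendors))
print(best[0])  # B: 103 - 4.5 = 98.5 beats A's 100.0 and C's 100.5
```

Note the pressure toward uniform pricing: any vendor whose net-of-service price exceeds the minimum on the board sells nothing.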
In spite of advancing commercialization, however, the economy of free information is not likely to disappear. Free transactions form the backbone of the Internet currently, and will surely remain, both in the academic and hobbyist communities, and within the commercial Net community as a necessary tool for attracting interest in goods and services. Free transactions take on the role that advertising plays in non-interactive media, extending the simple relay of image and brand with in-depth information for various classes of users.
The markets for free information can be described as single classes of consumers responding to a single class of competitive producers, but this description excludes the Internet's primary strength: interactivity. Interactivity between consumers and producers enables producers to respond to users' interests by differentiating their products. Inasmuch as this interactive exchange is the primary scenario for retrieving decision-relevant information, producers and consumers can no longer be usefully treated as broad, undifferentiated classes.
In order to fully appreciate the nature of electronic commerce, one requires a concrete model of Net information space and the transactions occurring therein. This model must bridge the gap between the abstract structure of economic intercourse and the particularities of Internet information management and communication. Toward this end we present here an agent-data-event model of information space, which treats commercial and other transactions as discrete messages carried by packets on a communication network. Engineering specifics are avoided, but the asynchronous nature of Internet communication is accounted for, as is the heterogeneity of the population of computational processes comprising the Net.
The key concepts of our model of information space are agents and nodes. Mathematically, nodes may be understood as vertices in a directed graph. Both the vertices and the edges of this graph are ``labeled'' with information: the vertices with static information (data) or dynamic information (computational processes), and the edges with communication properties (such as bandwidth). Agents are understood as entities which reside at one or more nodes, process the information stored at various nodes, and communicate with other agents.
The distinction between an agent and a computational process resident at a node is fuzzy rather than crisp, but is nevertheless important (much like the distinction between ``living'' and ``nonliving'' matter in biology). An ``agent'' is, in essence, a computational process that carries out computations distributed over many nodes, and that is largely concerned with interacting with other agents.
In line with these distinctions, dynamics in information space is understood to consist of computation, which may be carried out by processes resident at a node, within an agent, by agents interacting with nodes, or by complex inter-agent transactions.
Also, it is perhaps worth noting that the ``agents'' in our model may be either human or artificial; this distinction will not be an important one here, as we will seek a framework that allows us to analyze and characterize human and artificial agents in a uniform way, as parts of an overall matrix of agent-event determination. The distinction between human and artificial agents is a fuzzy one anyway, as all artificial agents are at present designed by humans and hence reflect human goals and biases; and all human interactions with the Internet occur through the medium of artificial computing ``agents'' such as Web browsers and search engines, which increase in intelligence and autonomy each year. Rather than distinguishing between human and artificial agents, a better approach would be to say that each agent is characterized by a certain amount of ``human-directedness''; but for the present, this concept will not play a role in our considerations either. An agent is considered to be defined behaviorally: in terms of the activities that it carries out at each time, based on the information it has received in the past.
Agents do things: they are the nexus of intelligence in information space. Nodes, on the other hand, are static and dynamic data resources; network data is the information available via network activity. We call a static resource a store node and a dynamic resource a processing node; some nodes are both store nodes and processing nodes. An agent retrieves information by calling on a node. Recursive retrieval occurs when a processing node calls on other nodes for processing or storage retrieval.
Next, events are information transactions, carried out by agents or by processes resident at individual nodes. When an event has an effect on data form or content, we say that it is a transformative event, and that the agent effecting the event is an actor. When an event has no effect on the state of network information except to provide an agent with a local copy of some data, the agent has effected a retrieval event. Transformative events change network data in ways that are subsequently visible to other processors and agents, and so we may say that agents behave as distributed processors in their effects on nodes (remembering the fuzziness of the distinction between node-resident processing and agent action).
Examples of events propagated by agents are replies to email or additions of hyperlinks in Web pages. Much of the activity carried out by artificial agents is currently retrieval-oriented, but as the Internet develops, transformative events will become more and more common, and the distinction between the two types of events may ultimately vanish. For instance, to an intelligent search engine, each query represents a piece of information regarding the humanly- perceived structure of the Web, and thus potentially leads to modifications in the search engine database; a prototypical retrieval event, searching for a Web page, then becomes a transformative event as well.
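The agent/node/event vocabulary above can be made concrete in a minimal sketch. The class names and the trivial node-resident process below are illustrative assumptions, not part of any real system; the point is only to show store nodes, processing nodes, labelled edges, and the two kinds of events.

```python
class Node:
    def __init__(self, data=None, process=None):
        self.data = data        # static information -> store node
        self.process = process  # callable -> processing node (a node may be both)
        self.edges = []         # outgoing edges, labelled with communication properties

    def link(self, other, bandwidth):
        self.edges.append((other, bandwidth))

class Agent:
    """An agent is defined behaviorally: it resides at nodes, processes
    their information, and effects events."""
    def __init__(self, name):
        self.name = name
        self.local_copies = []

    def retrieve(self, node):
        # Retrieval event: no effect on network state beyond a local copy.
        self.local_copies.append(node.data)
        return node.data

    def transform(self, node, new_data):
        # Transformative event: the agent acts as an actor, changing
        # the form or content of network data.
        node.data = new_data

store = Node(data="price list v1")
proc = Node(process=lambda text: text.upper())  # a trivial node-resident process
store.link(proc, bandwidth=10)

a = Agent("a1")
copy = a.retrieve(store)                # retrieval event
a.transform(store, proc.process(copy))  # transformative event via the processing node
print(store.data)  # PRICE LIST V1
```

The last two lines illustrate the fuzziness noted above: the agent's transformative event is itself carried out by a node-resident process.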
The dynamics of a system of agents acting on a graph of static and dynamic nodes will be extremely complex. In the following section we will review evidence for chaotic and otherwise complex dynamical behavior in certain economic systems: the evidence, as will become clear, very strongly suggests that the Net economy will give rise to structured chaos, and complex emergent dynamical patterns of various kinds. Making use of these patterns will become an essential aspect of doing business.
For example, a producer's data nodes are organized into an interactive web which classifies users by interests; but except in the simplest case, this classification cannot be an exclusive one. Agents who explore one node web will often explore others, and so become classifiable under many interest properties. This classification constitutes a macroscopic perspective on how the product or products are differentiated, for the user, from others' offerings -- while the user has similarly explored other products with correlated purposes or features. In the information space economy, there is an enhanced ability for producers to identify small niche markets, and consequently, an increased necessity for them to do so, using the vast amount of consumer behavior data at their disposal, and the most sophisticated available categorization techniques. Perceiving patterns amidst the complexity and chaos of the electronic market becomes imperative.
Next, along similar lines, it is interesting to observe that the dynamics of a network of agents in information space is essentially nondeterministic, in the sense that there is no way to determine the overall state of the system of agents at any given time. One might think that, since we are dealing with digital systems at this point in the evolution of computer technology, the dynamics of an Internet agent system could be understood by experimentation -- by manipulating an active network of data, involving continuous agent activity and continuous external modification of data. Such experimentation would require the collection of a periodic global state image; but collecting this state image would, in practice, inadvertently change the network itself by interfering with system operations (much as, in quantum physics, measuring a system inevitably disturbs its state). Halting the network of agents to measure the overall state is not a viable option either: doing so is difficult without building a system that synchronizes all activity, and global synchronization cannot be imposed on agents' communication and retrieval if the network is to remain an asynchronous, packet-switching network. (The asynchronous network does employ lazy synchronization, in the form of collection or set ordering for message encapsulation over packets; but many such connections exist simultaneously, which implies that node reading, processing and writing operations occur in random order relative to one another.) In short, the very nature of the Internet as reflected in our information space model makes global state measurement difficult, and results in a system that is in practice globally nondeterministic.
What we have, then, is a complex system of agents passing messages around a graph of nodes, interacting with the computational processes and the static data stored at nodes, and giving rise to dynamical patterns that can be understood only partially, not on the global level. Understanding and utilizing these subtle dynamical patterns becomes an essential part of business practice.
One important and interesting aspect of e-commerce is the software economy -- the economics of the ``computational processes'' resident at particular nodes in information space. Software economies are both free and priced. Currently emerging pricing structures include the common shrink-wrap model, and the usage model.
In the shrink-wrap model, a user pays a one-time license fee for unrestricted usage. This market for software is defined with one class of users and one producer. Multiple producers' products may be interoperable, codependent or competitive. For example, a word processor commonly requires a particular operating system to run, so the user must have the operating system before using the word processor. On the other hand, if the desktop operating system provides special interoperability characteristics for the word processor software, then the dependence of the word processor on the operating system is enhanced by interoperability features which may have independent utility for the user.
In the usage model, on the other hand, users pay for a particular transaction which employs software and computing resources of time and memory space. Computing resources have an opportunity cost across users. The usage model is suited to pricing in economies that are at least predominantly contained within the Internet. The usage model also serves varieties of users with disjoint utility functions: a consumer and a worker requiring search engine processing time may have very different demands, as the worker's time has opportunity or compensation costs for the firm. With the advent of Java applets and network-based computing environments, it is anticipated that the usage model will increasingly supplant the shrink-wrap model, potentially until the latter becomes obsolete.
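The two pricing models divide users by usage intensity, which a toy cost comparison makes plain. The fee figures below are made up for illustration; the break-even point is simply the license fee divided by the per-use charge.

```python
# Hypothetical fees: a one-time license versus a per-transaction charge.
LICENSE_FEE = 200.0   # shrink-wrap model: flat, regardless of usage
FEE_PER_USE = 0.05    # usage model: scales with usage

def shrink_wrap_cost(n_uses):
    return LICENSE_FEE

def usage_cost(n_uses):
    return n_uses * FEE_PER_USE

break_even = LICENSE_FEE / FEE_PER_USE      # 4000 uses
light, heavy = 500, 10000
print(usage_cost(light) < shrink_wrap_cost(light))   # True: light user prefers usage pricing
print(usage_cost(heavy) < shrink_wrap_cost(heavy))   # False: heavy user prefers the license
```

Light users thus favor the usage model and heavy users the license, which is one reason the two models can coexist as differential pricing for the same product.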
A notable aspect of the software economy is the high cost of producing the first unit of a product, as compared to the minimal cost of producing subsequent copies. This is an exaggeration of the economics of the traditional publishing industry (e.g. it is estimated that about 70% of the cost of producing an academic journal goes into producing the first issue). What this implies is that there is no easy way of determining the appropriate price for a software product; pricing strategies depend in a very fundamental way on estimates of future demand. Often the ``beta'' version of a software product is given away for free, to whet demand. Also, differential pricing is common, with inferior versions of a product offered for lower prices, even though there would be no additional cost in providing everyone with the fully featured software. Hal Varian (1996) has studied these issues in a thoughtful way, and presented economic mechanisms which provide for effective differential pricing in an e-commerce context.
The combination of differential pricing with usage-based pricing may lead to a new, complex and intriguing software market in the near future. Consider a situation in which, instead of purchasing a large shrink-wrapped program, each user dynamically assembles software according to their needs, from components (applets) available on the Net. The utility of a given applet to a given consumer will depend on the particular configuration of other applets within which they want to use it.
One then has a situation where agents are required in order to assemble computational (node-resident) processes. Each user will send out an agent to find the applets that he needs in a given context. The agent will survey the different applets and assess how well each one will fit the user's particular needs. Given its assessment of the utility of each applet, it will place bids, which the selling agents will then accept or reject based on their own predictively motivated differential pricing strategy. The selling agents will alter their pricing scheme based on perceived trends in demand, and, potentially, buying agents may alter their purchasing patterns in ways determined to have particular effects on selling agents' pricing schemes.
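A single round of this bid-and-accept negotiation can be sketched as follows. Everything here -- the feature-matching utility estimate, the bid-below-utility rule, and the demand-sensitive reserve price -- is an illustrative assumption about agent behavior, not a description of an existing market mechanism.

```python
def utility(applet_features, context_needs):
    """Buying agent's side: how well an applet fits the user's current
    configuration, scored by matching weighted needs against features."""
    return sum(weight for feature, weight in context_needs.items()
               if feature in applet_features)

def seller_reserve(base_price, recent_demand, sensitivity=0.1):
    """Selling agent's side: the reserve price rises with perceived demand,
    a crude form of predictively motivated differential pricing."""
    return base_price * (1.0 + sensitivity * recent_demand)

# (features, base price, recent demand) -- all numbers are made up
applets = {
    "spellcheck": ({"text", "dictionary"}, 2.0, 3),
    "chart":      ({"plot", "table"}, 4.0, 0),
}
needs = {"text": 3.0, "dictionary": 2.0, "plot": 1.0}

results = {}
for name, (features, base, demand) in applets.items():
    bid = 0.8 * utility(features, needs)   # the buying agent bids below full utility
    reserve = seller_reserve(base, demand)
    results[name] = bid >= reserve         # does the selling agent accept?
print(results)  # {'spellcheck': True, 'chart': False}
```

Iterating rounds like this, with sellers updating demand estimates from accepted bids and buyers adapting their bidding, gives exactly the coupled adaptive dynamics discussed in the following sections.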
We have said that the information economy will be complex and chaotic and nondeterministic, and we have suggested how this complexity might manifest itself in the software market of the near future. Now let us look at the issues from a more general perspective. What, fundamentally, is the difference between an economy based on agents in information space and an economy based in the ``real world''?
The difference, we suggest, is both quantitative and qualitative. The basic quantitative difference is the ease of communication, of information dissemination. In the information space economy, each agent is potentially ``directly connected'' to each other agent, and the fixed cost of exchanging information is very low. Obviously the ease of communication between economic agents has been increasing steadily through history, with the gradual introduction of superior communication and transportation technology. But with the advent of Internet commerce, we are seeing the largest and most sudden change ever in ease of communication of economic information. This change, it seems highly likely, is going to push the global economy into a new dynamical regime -- a new mode of being. The quantitative difference in ease of communication is leading to a qualitative change in focus -- from a focus on goods and their exchange to a focus on information and its communication.
Exchange of real-world goods is still an important part of e-commerce, along with exchange of information goods (software programs, databases, etc.); but what is new is that, in an information-space economy, every economic agent must continually be acutely aware of the information base from which its decisions emanate, and the effect that its actions have on the information bases of other agents in the market. When economic agents spend as much time ``thinking'' about the informational causes and effects of their actions, as about the actions themselves, then we have moved into a different order of economy -- an information-based economy rather than a goods-based economy. The fuzziness of this distinction should not be glossed over -- all economies involve both information exchange and goods exchange -- but the point is that the change in emphasis is likely to be sudden rather than gradual.
We have discussed some of the particularities of electronic commerce, and the general structure and dynamics of information space; now we will take a step back and review some general properties of economies, which happen to be particularly relevant to the dynamics of heterogeneous economic Internet agents.
Economic systems, ordinary or digital, are far more complex than traditional economic theory would like to admit. Most notably, the notion of an ``invisible hand'' which drives prices toward an equilibrium balance of supply and demand does not stand up to scrutiny, either mathematically or empirically. The reality seems to be that prices fluctuate according to noisy chaotic dynamics, with overlying long-range statistical predictability, and with perhaps some ``windows'' of unusually high short-term predictability. In short, economic systems are no more and no less predictable or equilibrium-oriented than other complex, self-organizing systems -- e.g. brains, ecosystems, or the Internet as a whole.
This conclusion is particularly important for electronic commerce, as it tells us what kind of dynamics to expect for the emerging Internet economy. It tells us that e-commerce agents, in order to be successful, will have to be skilled in the art of statistical pattern-recognition -- will have to be capable of recognizing and acting on short-, medium- and long-term trends in noisy, statistically patterned chaotic systems. In the real economy, the noisy, patterned chaos of price dynamics is dealt with by human intuition. In the Internet economy, on the other hand, things will happen too fast and in too distributed a way for human intuition to intervene every time a decision has to be made. The complexity of economic dynamics thus leads directly to the need for intelligent e-commerce agents.
As shown in detail by Saari (1994), the chaotic nature of economic dynamics can be derived directly from basic price theory. A brief review of Saari's arguments may be useful here. Consider, for simplicity, an economy without production, consisting of agents exchanging n types of goods according to positive prices. Let p_j be the price per unit of the j'th commodity; then the cost of x_j > 0 units is p_j x_j. The price of a commodity bundle x = (x_1, ..., x_n) is given by the inner product (p, x), where the vector p represents the prices of all commodities. In the absence of production, what the k'th agent can afford is based on what he can sell -- his ``initial endowment'' w_k.
How does each agent determine what to buy, in this simplified set-up, given a particular price vector? There is no natural ordering on n-dimensional space, so, following typical practice in economic theory, we may impose one by defining a ``utility function'' u_k : R^n_+ --> R for each agent, defined so that u_k(y) > u_k(x) means the k'th agent prefers bundle y to bundle x. For convenience, Saari assumes that utility functions are strictly convex, and that no good is undesirable to any agent, i.e. more of each good is better than less from each agent's perspective. These assumptions are not realistic, but the aim is to show that economic dynamics are chaotic; and it is plain from the mathematics that lifting these restrictions will not make the dynamics any less chaotic.
Within this framework, one can use traditional Lagrange multiplier methods to determine the k'th agent's demand at price p, and the k'th agent's ``excess demand'' x_k(p), being the difference between what is demanded and what is supplied (w_k). One can then construct the ``aggregate excess demand function'' X(p) as the sum over k of the x_k(p), and study its standard properties, among them Walras' law, (p, X(p)) = 0.
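The construction can be checked numerically in a small special case. The sketch below uses a two-agent, two-good exchange economy with Cobb-Douglas utilities u_k(x_1, x_2) = x_1^a x_2^(1-a) -- our own illustrative choice, not Saari's example -- for which the Lagrange conditions give the familiar closed-form demand; the code then verifies Walras' law at an arbitrary price vector.

```python
def demand(p, endowment, a):
    """Cobb-Douglas demand from the Lagrange conditions: spend fraction a
    of wealth (p, w_k) on good 1 and fraction 1-a on good 2."""
    wealth = p[0] * endowment[0] + p[1] * endowment[1]
    return (a * wealth / p[0], (1 - a) * wealth / p[1])

def aggregate_excess_demand(p, agents):
    """X(p): the sum over agents of demand minus endowment."""
    X = [0.0, 0.0]
    for endowment, a in agents:
        d = demand(p, endowment, a)
        X[0] += d[0] - endowment[0]
        X[1] += d[1] - endowment[1]
    return X

agents = [((3.0, 1.0), 0.6), ((1.0, 4.0), 0.3)]  # (endowment w_k, preference a_k)
p = (1.0, 2.5)                                   # an arbitrary positive price vector
X = aggregate_excess_demand(p, agents)
walras = p[0] * X[0] + p[1] * X[1]               # (p, X(p)), zero by Walras' law
print(abs(walras) < 1e-9)  # True
```

Walras' law holds here because each agent spends exactly its wealth, so the value of each individual excess demand is zero, and the aggregate inherits this.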
It is not hard to show that equilibria do exist in this model economy; i.e., there is a price vector p* for which the excess demand X(p*) = 0, and supply equals demand. But do prices tend toward these equilibria? In the classical picture, increased demand leads to increased prices, and so we have, for discrete-time dynamics, a price-adjustment iteration of the form p(t+1) = p(t) + h X(p(t)), for some adjustment rate h > 0. Saari shows that iterations of this kind need not converge to p*: even for well-behaved excess demand functions, they can exhibit periodic cycles and full-fledged chaos.
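A one-commodity numerical sketch shows how badly price adjustment of this kind can behave. Here prices follow p(t+1) = p(t) + X(p(t)), with an illustrative excess demand function of our own choosing (not taken from Saari); the resulting map p -> p(3.9 - p) is conjugate to the logistic map at parameter 3.9, a standard chaotic regime, so prices fluctuate around the equilibrium p* = 2.9 without ever settling.

```python
# Discrete price adjustment p <- p + h * X(p) with adjustment rate h = 1
# and illustrative excess demand X(p) = p * (2.9 - p), whose unique
# positive equilibrium is p* = 2.9 (where X vanishes).

def excess_demand(p):
    return p * (2.9 - p)

p = 0.5                       # initial price, chosen off the equilibrium
trajectory = []
for _ in range(1000):
    p = p + excess_demand(p)  # one tatonnement step
    trajectory.append(p)

tail = trajectory[-100:]
spread = max(tail) - min(tail)
print(spread > 0.5)  # True: prices are still ranging widely after 1000 steps
```

Prices remain bounded (the map sends (0, 3.9) into itself) yet never converge; the equilibrium is repelling, since the map's slope there has magnitude 1.9 > 1. This is exactly the classical adjustment story failing on its own terms.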
In fact, Saari's conclusions are even more dire. He notes that the application of dynamical systems theory to economic equations `` not only causes worry about the invisible hand story, but it forces us to question those tacit assumptions--assumptions basic to several tools from economics--about how the aggregate excess demand function for one commodity set relates to that of others. One might argue (and this is a common reaction during a colloquium lecture, particularly in a department of economics) that there may exist conditions imposing strong relationships. Yes, but it is obvious from the theorem that such constraints cannot be based upon the aggregate excess demand function (as is a common practice). Instead they appear to require imposing unrealistically harsh global restrictions on the agents' preferences....''
The basic point is that individual rationality on the part of individual economic agents does not necessarily lead to simple, orderly, convergent and transparently ``rational'' behavior on the market-wide level. A community of agents, each acting to fulfill their own utility functions, can lead to all manner of complex or chaotic dynamics. If one adds yet more realism, and makes the utility functions of the agents vary based on the agents' predictions of the future behaviors of the other agents, then things become yet more complex. The overwhelming point is that economic theory should be a theory of intelligent though not perfectly rational agents, navigating their way through a very complex dynamical landscape -- not moving toward or arriving at equilibria, but rather sculpting the chaos in which they live into temporarily beneficial forms.
One may well wonder how this conclusion, derived through mathematical theory, is borne out by detailed analysis of economic time-series data. The answer here is that real economic data is even less ``orderly'' than mathematical chaos. The incredible complexity of economic systems, combined with the relatively short time series involved, makes convincing data analysis difficult; but the clear message from the work that has been done is that economies are complex, and predictable only with great intelligence.
The case for chaos in economic time series is still ambiguous at this point (LeBaron, 1994), but the problem is not one of equilibrium-seeking versus chaos; rather it is one of chaos versus pseudorandom behavior generated by exogenous shocks. A real economy receives a steady stream of new products and new information, as well as intensive influence from politics, mass psychology and so forth, and these factors combine to jolt economic time series away from easily-identifiable chaotic attractors. Much as with the analysis of time series in behavioral science, one finds that it is more productive to seek statistically predictive models than to seek to identify attractors. The disentanglement of chaos from external ``noise'' in extremely complex systems like minds and economies is a formidable problem, and, ultimately, a problem of little practical import. The key point is that there is no simple structure, such as that posited by classical economic theory; instead there is a complex dynamical structure which can only be appreciated by methods of statistical pattern recognition.
This conclusion has, in essence, been realized by observers of the financial world for a long time. For instance, ``technical trading rules,'' which were once dismissed as useless, are now being accepted by economic theorists, and appear to be connected with the chaotic nature of financial time series (LeBaron, 1994). Many technical trading rules have to do with moving averages: they recommend that a trader buy when the price is above a long-term moving average, and sell when the price is below it (Levich, 1994). The particular form of the average is the subject of much scrutiny, as is the dependence of the urgency of selling on the difference between the price and the average. This is exactly the type of rule that one would expect to be useful in dealing with a system that displays ``structured chaos'' -- chaos with overlying statistical patterns. Technical trading rules are nothing but statistical prediction schemes, which ignore low-level chaos and noise, and focus on high-level regularities. In particular, rules involving buying when prices are above a long-range moving average are tied in with the ``fractional Brownian noise'' model of time series, a particular variety of statistically structured chaos.
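The moving-average rule itself is simple enough to sketch end to end. The synthetic price series below (a random walk with a slowly alternating drift, standing in loosely for ``statistically structured chaos''), the 50-step window, and the one-unit position sizing are all illustrative assumptions; this is a demonstration of the rule's mechanics, not a claim about its profitability.

```python
import random

random.seed(7)

def moving_average(series, window):
    return sum(series[-window:]) / window

# Synthetic prices: a random walk whose drift flips sign every 100 steps.
prices = [100.0]
for t in range(500):
    drift = 0.05 if (t // 100) % 2 == 0 else -0.05
    prices.append(prices[-1] + drift + random.gauss(0.0, 0.5))

position, cash, window = 0, 0.0, 50
for t in range(window, len(prices)):
    ma = moving_average(prices[:t], window)
    if prices[t] > ma and position == 0:     # price above the long MA: buy
        position, cash = 1, cash - prices[t]
    elif prices[t] < ma and position == 1:   # price below the long MA: sell
        position, cash = 0, cash + prices[t]
if position == 1:                            # close any open position at the end
    cash += prices[-1]
print(round(cash, 2))
```

The rule ignores step-to-step noise entirely and trades only on the slow regularity captured by the long average, which is precisely the "high-level regularities" point made above.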
Recognizing the complex, chaotic nature of economic markets, a number of researchers have thought to simulate economic dynamics with stripped-down ``complex systems'' models. For instance, David Lane and Roberta Vescovini, in a paper called ``When Optimization Isn't Optimal'' (1995), report on simulations in which agents choose between one product and another over and over again, on a series of trials, the decision at each time being based on information gathered from agents who have previously adopted the different products. The big result here is that following the optimal strategy suggested by probability theory -- the ``rational'' strategy -- is not actually maximally effective. What is optimal in terms of probability theory, applied to a single agent making decisions in a fixed environment, is not optimal in an environment consisting of other agents who are also making their own adaptive decisions. They find, furthermore, that in some cases an increased access to product information can lead to a decreased market share for the superior products.
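A simulation loosely in the spirit of Lane and Vescovini's can be sketched briefly. The mechanism below is our own drastic simplification, not their model: agents arrive in sequence, poll a few reports from earlier adopters of each product, choose the product with the better observed average, and then contribute their own noisy experience to the pool of reports. Product B is genuinely superior, yet herding on early noisy reports can hand product A the larger share.

```python
import random

def run_market(n_agents=200, sample_size=3, seed=None):
    rng = random.Random(seed)
    true_payoff = {"A": 0.5, "B": 0.6}     # product B is genuinely better
    reports = {"A": [0.55], "B": [0.55]}   # one neutral seed report each
    for _ in range(n_agents):
        # Each arriving agent samples a few earlier adopters' reports of
        # each product and picks the better observed average payoff.
        choice = max("AB", key=lambda prod: sum(
            rng.choice(reports[prod]) for _ in range(sample_size)) / sample_size)
        # The adopter's own noisy experience becomes a report for later agents.
        reports[choice].append(true_payoff[choice] + rng.gauss(0.0, 0.2))
    return len(reports["A"]) - 1, len(reports["B"]) - 1

inferior_wins = 0
for s in range(100):
    a_share, b_share = run_market(seed=s)
    if a_share > b_share:
        inferior_wins += 1
print(inferior_wins, "of 100 runs ended with the inferior product ahead")
```

The count of runs won by the inferior product depends on the noise level and sample size; the qualitative point is that each agent's individually sensible sampling strategy feeds back into the information available to later agents, so locally rational choices need not aggregate into the globally rational outcome.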
Along similar lines, Michael Youssefmir and Bernardo Huberman (1995) have experimented with large distributed multiagent systems, involving agents that continually switch their strategies in order to find optimal utility. They report that they ``have analyzed the fluctuations around equilibrium that arise from strategy switching and discovered the existence of a new phenomenon. It consists of the appearance of sudden bursts of activity that punctuate the fixed point, and is due to an effective random walk consistent with overall stability. This clustered volatility is followed by relaxation to the fixed point but with different strategy mixes from the previous one.'' In their particular system, then, the notion of equilibrium is a reasonable approximation to reality, but there are many different price vectors that give approximate equilibrium. One near-equilibrium is reached, only to be disturbed by a bout of random fluctuation that disrupts the system temporarily. The system then settles into a different near-equilibrium, and the cycle begins again.
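The basic ingredient of such models -- agents switching between strategies whose payoffs fall as they become crowded -- is easy to sketch. The following toy model is not Youssefmir and Huberman's system; the switching rate, noise level, and payoff function are all illustrative assumptions. It simply shows a population fluctuating around a mixed fixed point under noisy strategy switching.

```python
import random

def strategy_switching(n_agents=100, steps=300, seed=2):
    """Each agent uses strategy 0 or 1.  A strategy pays less as it
    becomes more crowded, so the mixed fixed point is near a 50/50
    split.  Agents occasionally re-evaluate and switch toward the
    better-paying strategy, with a little noise; the count of agents
    using strategy 1 fluctuates around the fixed point."""
    rng = random.Random(seed)
    strategies = [rng.randint(0, 1) for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        n1 = sum(strategies)
        # payoff of a strategy is the fraction of agents using the OTHER one
        payoff = {0: n1 / n_agents, 1: (n_agents - n1) / n_agents}
        for i in range(n_agents):
            if rng.random() < 0.1:            # occasional re-evaluation
                best = max(payoff, key=payoff.get)
                # small chance of a noisy (non-best) choice
                strategies[i] = best if rng.random() > 0.05 else 1 - best
        history.append(sum(strategies))
    return history

hist = strategy_switching()
print(min(hist), max(hist))
```

Plotting `hist` over time shows the count hovering near 50 with intermittent excursions; reproducing the clustered-volatility bursts of the original system would require the richer utility structure of their model.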
These simulations, like the mathematical calculations of Saari and Sonnenschein, involve approximations to economic reality. However, the phenomena that they describe have an undeniably ``real-world'' feel to them. These simulations capture some of the fascinating richness of real economic dynamics. And they are particularly interesting from the point of view of e-commerce, because, after all, electronic commerce will also involve the interaction of computational agents. One expects that the dynamics of e-commerce will resemble these simulations as much as they resemble the dynamics of current markets.
The nature of Internet commerce is that information exchange becomes as important as or more important than goods exchange. This feature, it seems clear, can only intensify the complex, chaotic nature of economic interaction. It makes it easier for a small effect to propagate widely. It makes changes happen more quickly, with less time for intuition to acclimate to the new circumstances. It makes markets more efficient, but it also makes more complex differential pricing schemes viable; and because it supports products with high initial production cost and low replication cost, it leads to pricing schemes that depend intimately on predictions of future demand.
In order to deal with the complexity of the electronic economy, it will be necessary for artificial intelligence and intelligence augmentation to be built into standard e-commerce agents. Selling agents will have to be able to predict the future demand for their products, in order to implement intelligent pricing strategies. Buying agents will have to be able to intelligently assess the utilities of various products on offer (e.g. various applets for use in software applications). This kind of prediction, because of the complex, chaotic nature of economic dynamics, will require sophisticated statistical pattern recognition techniques, including such tools as neural networks, genetic algorithms and Markov chains. It will also require that complex network dynamics be made accessible to human intuition, through the provision of sophisticated visualization tools, based on a general model of information space such as the agent-data-event model given above.
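Of the pattern recognition tools just mentioned, the Markov chain is the simplest to sketch. The fragment below fits a first-order Markov model to a discretized demand history and predicts the most probable next demand level; the demand series and the three-level discretization are invented for illustration, and a selling agent would of course need a far richer model.

```python
from collections import defaultdict

def fit_markov(states):
    """Estimate first-order Markov transition probabilities from a
    sequence of discretized demand levels (e.g. 'low'/'med'/'high')."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def predict_next(model, current):
    """Most probable next demand level given the current one."""
    return max(model[current], key=model[current].get)

# Hypothetical discretized demand history for a software product.
demand = ["low", "low", "med", "high", "med", "med", "high", "high",
          "med", "low", "med", "high", "med", "med", "low", "low"]
model = fit_markov(demand)
print(predict_next(model, "med"))  # most likely level following "med"
```

Like the technical trading rules discussed earlier, such a model ignores the low-level noise in the series and extracts only a high-level statistical regularity -- here, the empirical transition frequencies between demand levels.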
Glassman, Manasse, Abadi, Gauthier and Sobalvarro (1995). The Millicent Protocol for Inexpensive Electronic Commerce, at http://HTTP.CS.Berkeley.EDU/~gauthier/millicent/millicent.html
Goertzel, Ben (1997). From Complexity to Creativity. New York: Plenum Press.
Lane, David and R. Vescovini (1995). ``When Optimization Isn't Optimal.'' Santa Fe Institute Working Paper 95-05-044.
LeBaron, Blake (1994). Chaos and Nonlinear Forecastability in Economics and Finance. Proceedings of the Royal Society.
Levich, R.M. and L.R. Thomas (1994). The Significance of Technical Trading-Rule Profits in the Foreign Exchange Market: A Bootstrap Approach. Journal of International Money and Finance.
Saari, D.G. (1994). The Wavering Invisible Hand. Cambridge, MA: MIT Press.
Sonnenschein, H. (1991). Do Walras' Identity and Continuity Characterize the Class of Community Excess Demand Functions? Journal of Economic Theory.
Varian, Hal (1995). The Information Economy. Scientific American, September 1995, pp. 200-201.
Varian, Hal (1996). Differential Pricing and Efficiency. First Monday, v.1 no. 2, http://www.firstmonday.dk
Gerard Weisbuch (1991). Complex Systems Dynamics. New York: Addison-Wesley.
Youssefmir, M. and B. Huberman (1995). Clustered Volatility in Multiagent Dynamics. Santa Fe Institute Working Paper 95-05-051.
Zabih, Ramin (1995). Creating an Efficient Market on the World Wide Web, http://www.priceweb.com