Computing Reviews
Markov chain aggregation for agent-based models
Banisch S., Springer International Publishing, New York, NY, 2015. 195 pp. Type: Book (978-3-319-24875-2)
Date Reviewed: Jul 12 2017

Most techniques for modeling dynamic systems fall into one of two categories. Equation-based models such as system dynamics and other differential equation formalisms seek a closed-form expression for the overall dynamics, but typically characterize the system by measures at the macro level (such as averages over entities), and often miss local interactions among entities [1]. Entity-based or agent-based models capture these local interactions, but the emerging macro-level behaviors are difficult to generalize without a closed-form description. Perhaps the greatest challenge in modeling is how to combine the insight of the macro-focused equation-based approach with the accuracy of the micro-focused entity-based approach.

This volume, a revision of a dissertation in mathematical physics at the University of Bielefeld, Germany, offers a major contribution to this problem. The core idea of the book is drawn from the concept of lumpability in Markov theory, for which Banisch relies heavily on Kemeny and Snell [2], and in particular on Theorem 6.3.2 in that work. A system, described at the micro level, can be in any one of a number of states. If transition probabilities can be defined among those states, the micro system can be viewed as a Markov chain, but this description is difficult to apply to an agent-based system because the state space is huge, leading to unmanageably large transition matrices. We would like to sort those micro-level states into a much smaller number of partitions (mutually exclusive, collectively exhaustive groups). Now consider the probability of moving from a micro state in one partition into any state of another. If, for every pair of partitions, this probability is the same for every micro state in the first partition, then we can view the partitions as macro states that themselves form a Markov chain, and the system is said to be lumpable. This higher-level Markov chain offers a macro-level description of the micro system, to which the rich toolbox of methods from Markov analysis can be applied.
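The Kemeny-Snell condition described above is easy to check mechanically. The following is a minimal sketch, not drawn from the book; the transition matrix and the partitions are invented for illustration:

```python
import numpy as np

def is_lumpable(P, partition, tol=1e-12):
    """Kemeny-Snell strong lumpability: for every pair of blocks
    (A, B), the total probability of jumping from a state in A into
    B must be identical for all states in A."""
    for A in partition:
        for B in partition:
            # probability mass flowing from each state of A into B
            mass = [P[k, list(B)].sum() for k in A]
            if max(mass) - min(mass) > tol:
                return False
    return True

# Toy 4-state chain, lumpable under the partition {{0,1}, {2,3}}
P = np.array([[0.1, 0.3, 0.2, 0.4],
              [0.2, 0.2, 0.5, 0.1],
              [0.3, 0.3, 0.2, 0.2],
              [0.4, 0.2, 0.1, 0.3]])
print(is_lumpable(P, [{0, 1}, {2, 3}]))  # True
print(is_lumpable(P, [{0, 2}, {1, 3}]))  # False: wrong grouping
```

When the condition holds, the two blocks form a valid two-state macro chain; when it fails, "lumping" the states would not yield Markovian macro dynamics.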

Banisch discusses the conditions under which the very large but well-defined state space of a multiagent system is lumpable. He provides ways to approximate lumpability, as well as extensions and applications of the toolbox that he develops. Chapters 1 through 3 provide the necessary background in agents and Markov chains.

The book uses a simple voter model (chapter 4) as the example agent-based system. Each agent holds one of a finite set of opinions (in the simplest case, {0, 1}) and has some subset of the other agents as neighbors. At each step, we pick one agent i, then one of its neighbors j, and change the opinion of agent i to that of agent j. The micro state of the system is a vector of opinions where the index of each entry identifies an agent.
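The update rule just described fits in a few lines. This sketch follows the verbal description above; the ring network and the agent count are invented for the example:

```python
import random

def voter_step(opinions, neighbors, rng):
    """One update of the basic voter model: pick a random agent i,
    then a random neighbor j of i, and copy j's opinion onto i."""
    i = rng.randrange(len(opinions))
    j = rng.choice(neighbors[i])
    opinions[i] = opinions[j]

# Hypothetical 4-agent ring: each agent neighbors the two adjacent ones.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
opinions = [0, 1, 0, 1]              # the micro state vector
rng = random.Random(42)              # seeded for reproducibility
steps = 0
while len(set(opinions)) > 1 and steps < 100_000:
    voter_step(opinions, neighbors, rng)
    steps += 1
print(opinions)  # a consensus state: all 0s or all 1s
```

Because consensus states are absorbing, the simulation halts in a uniform opinion vector.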

In the simplest case, known as homogeneous mixing, every agent is a neighbor of every other agent. In this case, any two micro states with the same number of agents holding opinion 0 have identical transition probabilities, so the system is lumpable: the opinion counts themselves serve as the macro states. This particular system always collapses into consensus, but analyzing the macro system as a Markov chain allows study of characteristics such as transient behavior.
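Under homogeneous mixing the macro chain can be written down explicitly, and standard absorbing-chain machinery then applies. A hedged sketch (the function name and the choice of N = 10 agents are my own; the update picks an ordered pair of distinct agents, matching the description above):

```python
import numpy as np

def macro_voter_chain(N):
    """Macro transition matrix for the binary voter model under
    homogeneous mixing.  Macro state k = number of agents with
    opinion 1; k moves up or down by 1, each with probability
    k(N-k)/(N(N-1)), and k = 0 and k = N are absorbing."""
    P = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        up = down = k * (N - k) / (N * (N - 1))
        if k < N:
            P[k, k + 1] = up
        if k > 0:
            P[k, k - 1] = down
        P[k, k] = 1.0 - up - down
    return P

P = macro_voter_chain(10)
# Fundamental-matrix step: absorption probabilities from the
# interior states into the two consensus states k = 0 and k = N.
Q, R = P[1:-1, 1:-1], P[1:-1, [0, -1]]
absorb = np.linalg.solve(np.eye(len(Q)) - Q, R)
print(absorb[2, 1])  # from k = 3: probability 3/10 of all-ones consensus
```

The numerical answer recovers the classic martingale result that the probability of reaching all-ones consensus from k agents with opinion 1 is k/N.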

Most agent-based systems are not homogeneous, but symmetries in the network of agent interactions can still yield a lumpable system (chapter 5). In addition, the simple voter model posits extremely simple agents, but Banisch shows in chapter 6 how the methods extend to the contrarian voter model, in which the first agent in an interacting pair adopts the opinion of its partner only with probability 1 - p, and adopts the opposite opinion with probability p.
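The contrarian variant changes only the update rule. A minimal sketch of one step, with a small homogeneous-mixing population invented for illustration:

```python
import random

def contrarian_step(opinions, neighbors, p, rng):
    """One update of the contrarian voter model: agent i imitates a
    random neighbor j with probability 1 - p, and adopts the
    opposite binary opinion with probability p."""
    i = rng.randrange(len(opinions))
    j = rng.choice(neighbors[i])
    opinions[i] = (1 - opinions[j]) if rng.random() < p else opinions[j]

# Homogeneous mixing over 5 agents (everyone neighbors everyone else).
neighbors = {i: [j for j in range(5) if j != i] for i in range(5)}
opinions = [0, 0, 1, 1, 1]
rng = random.Random(1)
for _ in range(10_000):
    contrarian_step(opinions, neighbors, p=0.1, rng=rng)
print(opinions)
```

With p > 0 the consensus states are no longer absorbing, so the macro dynamics fluctuate indefinitely instead of freezing.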

Even with such simple systems, lumpability is far from common. The next extension (chapter 7) shows a correspondence between lumpability and information-theoretic measures for multilevel systems. A level is said to be informationally closed if knowledge of the micro state does not allow better prediction of the next macro state than knowledge of the previous macro state alone. Precisely when this holds, that is, when the mutual information between levels conditioned on the previous macro state is 0, the system is lumpable. However, while lumpability is binary, conditional mutual information is not, and this provides a way to characterize how close a system is to being lumpable. This extension allows discussion of the macro dynamics of the contrarian voter model on a wide range of networks, including those that do not offer the symmetry needed for full lumpability.
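The quantity involved is the conditional mutual information I(next macro; current micro | current macro), which can be estimated from sampled trajectories. A minimal plug-in estimator, with two invented toy data sets (one informationally closed, one not):

```python
from collections import Counter
from math import log2

def conditional_mi(triples):
    """Plug-in estimate of I(Y; X | Z) in bits from (y, x, z)
    samples: y = next macro state, x = current micro state,
    z = current macro state.  Zero means the macro level is
    informationally closed."""
    n = len(triples)
    yxz = Counter(triples)
    xz = Counter((x, z) for _, x, z in triples)
    yz = Counter((y, z) for y, _, z in triples)
    zc = Counter(z for _, _, z in triples)
    return sum((c / n) * log2(c * zc[z] / (xz[x, z] * yz[y, z]))
               for (y, x, z), c in yxz.items())

# Closed: the next macro state y depends only on z, not on x.
closed = [(0, 0, 0), (0, 1, 0), (1, 2, 1), (1, 3, 1)]
# Leaky: y depends on the micro state x even though z is fixed.
leaky = [(0, 0, 0), (1, 1, 0), (0, 2, 0), (1, 3, 0)]
print(conditional_mi(closed))  # 0.0
print(conditional_mi(leaky))   # 1.0
```

The leaky case carries a full bit of micro-level information about the next macro state, so any lumping over those micro states would fail.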

Chapter 8 illustrates the broader applicability of the book’s methods by elucidating the conundrum in evolutionary theory between adaptation (where a whole population shifts its characteristics to become more viable) and speciation (where the population divides into subgroups that focus on different parts of the fitness landscape). Chapter 9 generalizes still further by discussing the relation of lumpability to a formal definition of emergent behavior, a current question in the study of complex systems.

This volume offers two things of value to the research community in agent-based modeling and complex adaptive systems. First, its results are interesting and powerful, and deserve to be applied widely. Second, it should be imitated as a beautiful example of a research program that starts by applying a focused concept to a very simple model, generalizes the resulting insights to more complex models, and then sketches the potential applications far beyond the scope of the original work. It deserves wide and careful attention.

Reviewer: H. Van Dyke Parunak. Review #: CR145417 (1709-0587)
1) Shnerb, N. M.; Louzoun, Y.; Bettelheim, E.; Solomon, S. The importance of being discrete: life always wins on the surface. Proceedings of the National Academy of Sciences 97, 19 (2000), 10322–10324.
2) Kemeny, J. G.; Snell, J. L. Finite Markov chains. Van Nostrand, Princeton, NJ, 1960.
Categories: Markov Processes (G.3); Multiagent Systems (I.2.11); Distributed Artificial Intelligence (I.2.11); Model Development (I.6.5)
