Intelligent systems continue to evolve toward modeling the behavior of increasingly complex situations. They now involve multiple agents working cooperatively as a team in competition with other teams of agents. As such, they begin to model the kinds of problems people face in real-world social, economic, and political situations. This paper discusses the work being done on such multiagent systems.
The authors have written this book chapter as a survey, so it doesn’t go into much detail about the systems it covers. It shows that many approaches are being tried, and that there are several definitions of what it means for agents to cooperate. In the systems surveyed, the individual agents usually have a hybrid architecture, mixing reactive behavior with planning in various ways. Planning is aided by the ability to recognize the plans being carried out by the agents of opposing teams. Reactive behavior is improved by various learning methods, particularly reinforcement and case-based learning. The adversarial situations modeled include RoboCup, Quakebot (a single agent facing a team of opposing agents), and business and military confrontations.
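To give a flavor of the reinforcement learning the survey credits with improving reactive behavior, here is a minimal sketch of tabular Q-learning. The 1-D corridor environment, the state and action sets, and all parameter values are illustrative assumptions of mine, not details taken from any system the survey covers.

```python
import random

# Illustrative Q-learning sketch (not from the surveyed systems):
# an agent in a 1-D corridor learns to walk right toward a goal.
N_STATES = 5          # states 0..4; reaching state 4 yields reward 1
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection: mostly exploit, sometimes explore
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # standard Q-learning update toward reward plus discounted best next value
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
```

After training, the greedy policy prefers moving right in every non-terminal state; this captures, in miniature, how a reactive agent's action choices can be tuned by experience rather than hand-coded.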
I was surprised that the authors said nothing about the applicability of game theory, although reinforcement learning does have some connection to it. Nevertheless, with 69 references, this paper is a good place to start for someone new to the subject.