Competition in a virtual environment will be much more lifelike, challenging, and interesting when autonomous characters adapt to the human participant’s behavior. Most machine learning techniques won’t work well in this domain, however, because they require many iterations to produce substantial improvement. This paper focuses on machine learning techniques that work fast enough to produce noticeable improvements in the skills of autonomous characters within minutes of interaction.
Different techniques are used at different time scales. Case-based planning (prediction) and learning are used to quickly decide the lowest-level actions. Less frequently, a task is selected by simulating the alternative tasks, choosing the one that appears to produce the best future outcome, and updating that task's expected value. Another technique mimics the human participant's behavior at the action and task levels; the authors found it to be the second most effective technique, after low-level action prediction. Goals are selected on the slowest time scale, by estimating the amount of "happiness" associated with each goal; learning takes place by adjusting the weights of linear perceptrons.
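Two of these mechanisms can be sketched in a few lines. The sketch below is a minimal illustration under assumed interfaces: the `simulate` callback, the `expected_value` table, the feature vectors, and all numeric constants are this sketch's inventions, not the paper's code.

```python
# Illustrative sketch (assumed interfaces): task selection by simulating
# alternatives with an expected-value update, and goal selection via linear
# perceptrons that estimate "happiness".

class GoalPerceptron:
    """Linear perceptron estimating the 'happiness' a goal would yield."""

    def __init__(self, n_features, lr=0.1):
        self.weights = [0.0] * n_features
        self.lr = lr

    def predict(self, features):
        # Estimated happiness is a weighted sum of state features.
        return sum(w * f for w, f in zip(self.weights, features))

    def update(self, features, observed_happiness):
        # Move the weights toward the happiness actually observed.
        error = observed_happiness - self.predict(features)
        self.weights = [w + self.lr * error * f
                        for w, f in zip(self.weights, features)]


def select_goal(goal_perceptrons, features):
    """Pick the goal whose perceptron predicts the highest happiness."""
    return max(goal_perceptrons,
               key=lambda g: goal_perceptrons[g].predict(features))


def select_task(tasks, simulate, expected_value, alpha=0.3):
    """Simulate each candidate task, pick the one with the best predicted
    outcome, and nudge its stored expected value toward the simulation."""
    outcomes = {t: simulate(t) for t in tasks}
    best = max(tasks, key=lambda t: outcomes[t])
    expected_value[best] += alpha * (outcomes[best] - expected_value[best])
    return best
```

Because each perceptron update and each expected-value nudge is a constant-time adjustment, learning of this form can plausibly show visible improvement within minutes of interaction, which is the paper's stated goal.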
This paper will interest people who model behavior in autonomous entities, such as those found in video games and other virtual environments, but its emphasis on fast learning techniques will also interest machine learning researchers. The paper summarizes experiments that evaluated the relative contributions of the learning methods studied, and briefly describes how automatic camera and attention control can be achieved.