The representation of, and reasoning about, nonmonotonic, defeasible knowledge is an important subject with a wide body of literature. For example, Pearl [1] and Geffner [2] derive rankings of models and of defaults from qualitative abstractions of an agent's experience, with higher-ranked models standing for more surprising situations.
This paper is based on the author's philosophical investigations from the early 1970s and summarizes the refinements of that theory as he has applied it to AI. The application serves as the constructive background for the OSCAR computer system, developed to demonstrate these ideas.
Pollock categorizes the defeaters of reasons and treats the complicated cases of collective and provisional defeat. He gives criteria of adequacy for a defeasible reasoner, then describes interest-driven and interrupt-driven defeasible reasoners. Finally, examples taken from OSCAR show conclusions being retracted and reinstated.