Modeling human reasoning means dealing with uncertainty. Conventional logic, in which rules are absolute, cannot be applied directly. Instead we need rules that admit exceptions: we use them, but accept that particular cases may defeat them. Rules representing obligations can lead to conflicting outcomes, so a priority must be established, with a rule applicable only if it does not conflict with a higher-priority rule.
This paper addresses this sort of reasoning, and it deliberately avoids probability as its treatment of uncertainty. It instead uses three-valued Kleene semantics for logical reasoning, in which a proposition may be true, false, or currently undetermined, and an undetermined proposition may later resolve to true or false as more information becomes available.
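To make the three-valued idea concrete, here is a minimal sketch of the strong Kleene connectives, using Python's `None` to stand for "currently undetermined". The function names `k_not`, `k_and`, and `k_or` are illustrative, not taken from the paper.

```python
def k_not(a):
    """Kleene negation: an undetermined value stays undetermined."""
    return None if a is None else not a

def k_and(a, b):
    """Kleene conjunction: False dominates; otherwise undetermined propagates."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    """Kleene disjunction: True dominates; otherwise undetermined propagates."""
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

# An undetermined proposition can later "evolve": a compound formula may be
# settled already, or may itself remain undetermined until more is known.
print(k_or(True, None))   # True: settled regardless of the unknown input
print(k_and(True, None))  # None: still undetermined
```

Note how `k_or(True, None)` is already true while `k_and(True, None)` stays undetermined, capturing the paper's point that truth values can be refined as information arrives.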
A key aspect is that a rule applies only if no abnormality blocks it. We may start by assuming a rule applies, but must retract that assumption if further reasoning reveals an abnormality. The logic programming in the paper uses negation as failure to establish a priority order between rules representing norms that, if applied simultaneously, would yield conflicting conclusions. Negation as failure is the logic programming inference that something is false because it cannot currently be proven true, leaving open the possibility that further information will enable it to be proven true. Here, then, a lower-priority rule applies only if the preconditions of a higher-priority rule cannot currently be proven true.
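The priority mechanism can be sketched as follows. This is not the paper's formalism, just an illustration with invented predicate names: a lower-priority norm ("report") fires only when the higher-priority norm's precondition ("confidentiality_duty") cannot currently be proven, and a newly proven fact defeats the earlier conclusion.

```python
# What is currently provable; new facts may be added as reasoning proceeds.
facts = {"obligation_to_report"}

def proven(p):
    return p in facts

def naf(p):
    """Negation as failure: treat p as false because it is not provable now."""
    return not proven(p)

def applicable_rules():
    rules = []
    # Higher-priority norm: confidentiality overrides reporting,
    # but only if a confidentiality duty is actually provable.
    if proven("confidentiality_duty"):
        rules.append("keep_silent")
    # Lower-priority norm: report, provided the higher-priority rule's
    # precondition cannot currently be proven true.
    if proven("obligation_to_report") and naf("confidentiality_duty"):
        rules.append("report")
    return rules

print(applicable_rules())          # ['report']
facts.add("confidentiality_duty")  # further information is revealed
print(applicable_rules())          # ['keep_silent']: the earlier conclusion is defeated
```

The second call shows the defeasible character described above: a conclusion drawn under negation as failure is withdrawn once the higher-priority rule's precondition becomes provable.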
The paper shows how its logic programming model relates to modeling with an artificial neural network. This is a useful aspect: the diagrammatic nature of the neural network approach helps to give a picture of how the model works. The paper goes through its logic programming techniques in detail, although the treatment is quite abstract and would have benefited from more concrete examples along the way. It does, however, conclude with one fairly detailed example.
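The general flavor of such logic-program-to-network translations (in the spirit of CILP-style constructions, not necessarily the paper's exact one) can be sketched briefly: a rule becomes a threshold unit that fires when its positive body literals are on and its negated ones, such as an abnormality, are off. The rule and names below are illustrative.

```python
# Illustrative rule: fly <- bird, not abnormal
def step(x, threshold):
    """Simple threshold activation: 1 if the weighted input reaches the threshold."""
    return 1 if x >= threshold else 0

def rule_unit(bird, abnormal):
    # Weight +1 for the positive literal, -1 for the negated literal;
    # the threshold equals the number of positive body literals (here, 1).
    return step(1 * bird + (-1) * abnormal, 1)

print(rule_unit(bird=1, abnormal=0))  # 1: the rule fires, conclude fly
print(rule_unit(bird=1, abnormal=1))  # 0: the abnormality blocks the rule
```

This mirrors the abnormality mechanism discussed earlier: the unit representing the rule is inhibited exactly when the abnormality neuron is active, which is what makes the network picture of the model easy to read off a diagram.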