Traditional logic assumes, sometimes implicitly, that we are absolutely sure about each statement S in the knowledge base (KB); the question is what we can deduce from this knowledge. In practice, we often have some uncertainty about a statement S. A natural way to describe this uncertainty is to assign, to each statement S from the KB, the (subjective) probability P(S) that the statement is true. Since we are not 100 percent certain about the statements from our KB, we are not 100 percent sure about their consequences C either; so, for each possible consequence C, we need to estimate its probability P(C).
While we know the probabilities of the individual statements from the KB, we usually do not know how correlated these statements are. Because of this, there are two possible ways to estimate P(C). Nilsson’s approach--first described in [1]--is to consider all possible joint probability distributions consistent with the given values P(S), and thus come up with an interval of possible values of P(C). The maximum entropy (MaxEnt) approach--used by J. Pearl in [2]--selects the “most reasonable” joint distribution and computes the probability P(C) based on this selected distribution.
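The contrast between the two approaches can be sketched on an assumed toy KB consisting of the sentences A and A -> C, with illustrative probabilities P(A) = 0.9 and P(A -> C) = 0.8 (these numbers, and the brute-force grid search, are for illustration only and are not taken from the paper under review). Enumerating joint distributions over the four truth assignments to (A, C) that are consistent with the given P(S) values yields Nilsson’s interval for P(C); picking the consistent distribution of maximum entropy yields the MaxEnt point estimate:

```python
from itertools import product
from math import log

# Four possible worlds over the atoms (A, C), in a fixed order.
WORLDS = [(a, c) for a in (True, False) for c in (True, False)]

# Assumed toy KB: sentences as world -> bool, with subjective probabilities.
KB = [
    (lambda w: w[0], 0.9),                # P(A) = 0.9
    (lambda w: (not w[0]) or w[1], 0.8),  # P(A -> C) = 0.8
]
consequence = lambda w: w[1]              # the consequence C

def prob(sentence, dist):
    """Probability of a sentence under a joint distribution over WORLDS."""
    return sum(p for w, p in zip(WORLDS, dist) if sentence(w))

def entropy(dist):
    """Shannon entropy, with the usual convention 0 * log 0 = 0."""
    return -sum(p * log(p) for p in dist if p > 0)

STEP = 0.05
n = round(1 / STEP)
lo, hi, best_h, maxent_pc = 1.0, 0.0, -1.0, None
# Brute-force grid over joint distributions (q1, q2, q3, q4), q4 derived.
for i, j, k in product(range(n + 1), repeat=3):
    q4 = 1.0 - (i + j + k) * STEP
    if q4 < -1e-9:
        continue
    dist = (i * STEP, j * STEP, k * STEP, max(q4, 0.0))
    # Keep only distributions consistent with the stated P(S) values.
    if all(abs(prob(s, dist) - target) < 1e-6 for s, target in KB):
        pc = prob(consequence, dist)
        lo, hi = min(lo, pc), max(hi, pc)       # Nilsson's interval
        if entropy(dist) > best_h:              # MaxEnt selection
            best_h, maxent_pc = entropy(dist), pc

print(f"Nilsson interval: P(C) in [{lo:.2f}, {hi:.2f}]")  # [0.70, 0.80]
print(f"MaxEnt estimate:  P(C) = {maxent_pc:.2f}")        # 0.75
```

In this example the two constraints pin down P(A and C) = 0.7 and leave only the 0.1 of probability mass on the not-A worlds free, so Nilsson's method reports the interval [0.70, 0.80], while MaxEnt splits the free mass evenly and commits to the single value 0.75. In general, computing the interval is a linear programming problem over the exponentially many possible worlds; the grid search here is merely the simplest illustration.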
Underlying both approaches is the traditional (monotonic) logic, which is natural since probability theory is based on the usual logic (and related set operations). In practical reasoning, however, we often use nonmonotonic reasoning techniques, such as default reasoning. It is therefore desirable to modify the usual probabilistic approach so as to take this nonmonotonicity into account. For the MaxEnt approach, this was largely done by Pearl, who showed that an appropriate use of probabilistic reasoning can lead, in effect, to nonmonotonic reasoning [2].
The authors describe a similar combination for Nilsson’s approach. In the past, there have been semi-heuristic attempts to incorporate uncertainty into logic programming. Instead of devising similar semi-heuristic rules, the authors take into account that modern mathematical probability theory itself first appeared, in the 1930s, as an axiomatic theory. Following this historical pattern, they formulate reasonable properties of belief revision, and then find formulas that satisfy these properties. The result is not only foundationally interesting but also reasonably computationally efficient. The authors illustrate this efficiency on a difficult reasoning problem with a large amount of uncertainty: determining who was responsible for a cyber attack.