The main point of the paper is a comparison of Dempster-Shafer composition with what the author calls convex Bayesian conditionalization. Let P be a given set of probabilities and P′ an update. A conditional probability is P(· | e) = P(· ∧ e) / P(e). Assume that we give P and P′ relative weights (importance) b : 1. Then, with α = b / (b + 1), the convex Bayesian update of P(s | e) is the convex combination α P(s | e) + (1 − α) P′(s | e). Since b is unknown, this gives a confidence interval. The author then proves that Dempster-Shafer updating gives tighter bounds than convex Bayesian updating, and questions whether this really is an advantage. Dempster updating is mathematically justified only if the probabilities to be combined derive from independent data; this essential condition is missing in Shafer's presentation. Independence is so strong a condition that tight bounds are well justified in that case. If the updating material is not independent of the data already used, Dempster composition is illegitimate. The author's convex composition needs probabilistic justification.
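A minimal sketch of the update described above, assuming (as the review's definitions suggest, though the original equation is not reproduced here) that the combined probability is the convex mixture α·P(s | e) + (1 − α)·P′(s | e) with α = b / (b + 1), and that the interval for unknown b is obtained by sweeping b over a range of candidate weights; the function names are illustrative, not from the paper:

```python
def convex_update(p_cond, p_prime, b):
    """Convex Bayesian update of two conditional probabilities
    with relative weights b : 1, i.e. alpha = b / (b + 1)."""
    alpha = b / (b + 1.0)
    return alpha * p_cond + (1.0 - alpha) * p_prime

def update_interval(p_cond, p_prime, b_values):
    """Since b is unknown, sweep candidate weights to get the
    interval of possible updated probabilities."""
    vals = [convex_update(p_cond, p_prime, b) for b in b_values]
    return min(vals), max(vals)

# Example: old conditional probability 0.8, updating probability 0.2.
lo, hi = update_interval(0.8, 0.2, [0.1, 1.0, 10.0])
```

As b ranges over (0, ∞), α ranges over (0, 1), so the update interval approaches the full span between P′(s | e) and P(s | e); this looseness, compared with Dempster-Shafer's tighter bounds, is the trade-off the review discusses.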