This research paper compares the reasoning human observers use to judge transparency with that of a machine programmed to estimate true transparency. Two sources of perceived transparency, additive and subtractive color mixture, are considered. The first arises when either a mesh too fine to be resolved or a fast-moving object blurs and partially obscures the target. Here Beck develops earlier work by Metelli [1] based on an episcotister, a rotating disk with an open sector that alternately exposes and occludes the target object; Beck's model establishes boundary constraints that must not be violated if true transparency is to be perceived. The second arises when a transparent filter obscures an object. For this case an alternative model is presented, and its boundary constraints are related to those of the first.
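To make the flavor of these boundary constraints concrete, the following is a minimal sketch of the textbook Metelli episcotister equations, not code from the paper under review; the function name and parameter names are our own. Given the reflectances of two background regions seen directly and through the rotating layer, it recovers the layer's parameters and tests whether the constraints for physically possible transparency hold.

```python
def metelli_check(p, q, a, b, eps=1e-9):
    """Sketch of Metelli's episcotister model (standard formulation, not
    the paper's exact notation). a, b are the reflectances of two
    background regions seen directly; p, q are the same regions seen
    through an episcotister with open-sector fraction alpha and
    reflectance t. Returns (alpha, t, valid), where valid is True only
    if the boundary constraints 0 < alpha < 1 and 0 <= t <= 1 hold."""
    if abs(a - b) < eps:
        return None  # degenerate: background regions indistinguishable
    # From p = alpha*a + (1-alpha)*t and q = alpha*b + (1-alpha)*t:
    alpha = (p - q) / (a - b)
    if not (eps < alpha < 1 - eps):
        return (alpha, None, False)  # constraint on alpha violated
    t = (p - alpha * a) / (1 - alpha)
    return (alpha, t, 0.0 <= t <= 1.0)

# Consistent case: alpha = 0.5, t = 0.4 generate p = 0.6, q = 0.3,
# so the check recovers approximately (0.5, 0.4, True).
consistent = metelli_check(0.6, 0.3, 0.8, 0.2)

# Violation: the contrast seen through the layer (0.9 vs 0.1) exceeds
# the background contrast (0.8 vs 0.2), forcing alpha > 1.
violation = metelli_check(0.9, 0.1, 0.8, 0.2)
```

A machine applying such a check would reject the second stimulus outright; Beck's point, summarized below, is that human observers often do not.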
Beck demonstrates by example that the human visual system is insensitive to certain violations of these constraints. He suggests a partial explanation, revising Metelli's formulation to operate on lightness rather than reflectance. He further considers the degree of transparency perceived and the effect of figural cues, such as shape, and surveys a number of cases in which humans judge transparency inaccurately.
This paper is not very readable, particularly where it defines terms, such as the causes of the perception of transparency. The author does, however, introduce some important ideas. Applied, they will help designers identify areas of potential misunderstanding, such as when shape cues mislead a viewer into believing that a separate object behind a transparent surface is actually part of an opaque object. The paper repays study by any researcher working on the generation or evaluation of computer-generated scenes containing transparent objects.