A visual perception model that predicts visual search time is discussed in this paper. Of specific interest to the authors is a model that can be used by both disabled and able-bodied people. The experiment used to gather data for the model's formulation involves locating a particular icon or object in a set of icons or objects presented to the study participant on a computer screen. The predictive model can then be employed to compare different computer visual interfaces with respect to search time.
The model divides the computer screen into rectangles, one of which contains the actual target. Probable points of attention are identified by comparing the other rectangles to the one containing the target: each rectangle is decomposed into a set of features, and similarity is then computed from the feature values. A worked example calculation would have helped illustrate the methodology.
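To make the idea concrete, the following is a minimal sketch of that style of prediction, not Biswas and Robinson's actual formulation: the feature vectors, similarity measure, threshold, and per-fixation cost are all illustrative assumptions.

```python
# Hypothetical sketch of a similarity-driven search-time estimate.
# Each screen rectangle is reduced to a feature vector; distractors
# sufficiently similar to the target count as probable fixations.

def similarity(a, b):
    """Inverse-distance similarity between two feature vectors (assumed form)."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

def predicted_search_time(target, distractors, fixation_cost=0.25, threshold=0.5):
    """Estimate search time as a fixed cost per probable fixation.

    Rectangles whose features resemble the target above `threshold` are
    counted as likely fixations; all parameter values are illustrative.
    """
    probable = [d for d in distractors if similarity(target, d) >= threshold]
    # One fixation on the target itself, plus one per similar distractor.
    return fixation_cost * (1 + len(probable))

# Example: target icon features (width, height, hue) and three distractors,
# only the first of which closely resembles the target.
target = (32.0, 32.0, 0.6)
distractors = [(32.0, 32.0, 0.55), (64.0, 16.0, 0.1), (32.0, 30.0, 0.6)]
print(predicted_search_time(target, distractors))  # 0.5
```

The point of the sketch is only the shape of the computation: more visually similar distractors mean more probable fixations and hence a longer predicted search time.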
Biswas and Robinson compare predicted search times with actual search times for a given visual task. The model's predictions can differ from the observed values by some 40 percent, but apparently this is acceptable in this field of endeavor: correlations of 60 percent are considered significant. Such is the world of research in the social sciences, where experiments involve human beings.
Some researchers, especially those in the soft sciences, may find this paper of value.