This research project seeks to answer two questions: (1) Can the properties of a user's current visualization task, their performance, and their long-term cognitive abilities be inferred solely from eye gaze data? (2) Which eye gaze features are the most informative? The long-term goal of this research is to develop data visualizations that adapt to different users and tasks.
The paper describes a user study in which 35 subjects performed five types of tasks on two types of data visualization: a bar graph and a radar graph. Each visualization was divided into five areas of interest (AOIs): high area, low area, labels, question text, and legend. The authors collected eye gaze measures such as number of fixations, fixation rate, fixation duration, saccade length, and saccade angle. They developed a toolkit that converts the raw eye gaze data into summary statistics, particularly with respect to specific AOIs, and then applied various classifiers to these statistics to predict task type, task complexity, task difficulty, user performance, and cognitive abilities.
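The pipeline described above can be sketched roughly as follows: summarize raw fixations into overall and per-AOI statistics, then train a classifier on those features. This is a minimal illustration, not the authors' toolkit; the fixation data, AOI names, feature set, and task labels here are synthetic, and the real study used a much richer feature set and several classifier families.

```python
# Hypothetical sketch of the paper's approach: per-AOI gaze statistics
# feeding a classifier. All data below is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# The five AOIs named in the study (bar-graph condition).
AOIS = ["high", "low", "labels", "question", "legend"]

def gaze_features(fixations):
    """fixations: list of (aoi, duration_ms) tuples for one trial.
    Returns overall fixation count and mean duration, plus a per-AOI
    fixation count and share of total gaze time."""
    total = sum(d for _, d in fixations) or 1.0
    feats = [len(fixations), total / len(fixations)]
    for aoi in AOIS:
        durs = [d for a, d in fixations if a == aoi]
        feats += [len(durs), sum(durs) / total]
    return feats

def make_trial(task):
    # Invented generative assumption: task 0 dwells more on the data areas,
    # task 1 more on the legend and labels.
    bias = ["high", "low"] if task == 0 else ["legend", "labels"]
    n = int(rng.integers(8, 20))
    return [(str(rng.choice(bias + AOIS)), float(rng.integers(100, 600)))
            for _ in range(n)]

y = rng.integers(0, 2, size=200)                       # synthetic task labels
X = np.array([gaze_features(make_trial(t)) for t in y])

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Because the per-AOI time-share features directly encode where a user looked, a linear model separates the two synthetic "tasks" well, which loosely mirrors the paper's finding that AOI-related features carry most of the predictive signal.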
Regarding the first research question (1), the authors found that a user's eye gaze data alone can predict visualization task type, task complexity, task difficulty, user performance, and user cognitive abilities with accuracies ranging from 40 to 80 percent. However, it was less effective at predicting user expertise. Some experiments also showed that prediction accuracy was highest when using the eye gaze data collected at the beginning of each task. In addition, logistic regression consistently outperformed the other machine learning models in this study. Regarding the second research question (2), the authors found that AOI-related features were crucial for more accurate predictions.
This research provides some evidence that eye gaze data can be used to help develop adaptive data visualizations. However, eye gaze data may need to be combined with other sources of information to improve prediction accuracy.