Authors:
Moreno I. Coco, Frank Keller
Abstract:
The role of task has received special attention in visual cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye-movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al., as well as additional features, from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrate that eye-movement responses make it possible to characterize the goals of these tasks. Then, we train three different types of classifiers and predict the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigate. Overall, the best task classification performance is obtained with a set of seven features that include both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention, and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
Keywords: Task classification; eye-movement features; active vision; visual attention; communicative tasks.