Top-Down Visual Attention Estimation Using Spatially Localized Activation Based on Linear Separability of Visual Features

Author(s)

    • HIRAYAMA Takatsugu
    • Graduate School of Information Science, Nagoya University; Graduate Program for Real-World Data Circulation Leaders, Nagoya University
    • MASE Kenji
    • Graduate School of Information Science, Nagoya University

Abstract

Intelligent information systems are attracting growing interest. Examples include driving support vehicles that can sense driver state and communication robots that can interact with humans. Modeling how people search for visual information is indispensable for designing such systems. In this paper, we focus on human visual attention, which is closely related to visual search behavior. We propose a computational model that estimates human visual attention while a person carries out a visual target search task. Existing models estimate visual attention using the ratio between a representative value of the visual features of a target stimulus and that of distractors or the background. These models, however, often cannot achieve good performance on difficult search tasks that require a sequential spotlighting process. For such tasks, the linear separability effect of a visual feature distribution should be considered. Hence, we introduce this effect into spatially localized activation. Concretely, our top-down model estimates target-specific visual attention using Fisher's variance ratio between the visual feature distribution of a local region in the field of view and that of the target stimulus. We confirm the effectiveness of our computational model through a visual search experiment.
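The abstract's key quantity is Fisher's variance ratio between two feature distributions: a local region of the field of view and the target stimulus. The sketch below is not the authors' implementation; it illustrates the two-class Fisher criterion (between-class variance over within-class variance, averaged across feature dimensions) under the assumption that each distribution is given as a matrix of per-pixel feature samples. The function name and the demo data are hypothetical.

```python
import numpy as np

def fisher_variance_ratio(region_feats, target_feats):
    """Two-class Fisher criterion, computed per feature dimension and
    averaged: (difference of class means)^2 / (sum of class variances).
    A high value means the region's features separate well from the
    target's, i.e. the region is unlikely to contain the target."""
    region_feats = np.asarray(region_feats, dtype=float)
    target_feats = np.asarray(target_feats, dtype=float)
    mu_diff = region_feats.mean(axis=0) - target_feats.mean(axis=0)
    within = region_feats.var(axis=0) + target_feats.var(axis=0)
    eps = 1e-12  # guard against constant (zero-variance) features
    return float(np.mean(mu_diff ** 2 / (within + eps)))

# Hypothetical demo: 200 samples of a 3-dimensional visual feature.
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(200, 3))    # target feature samples
distinct = rng.normal(4.0, 1.0, size=(200, 3))  # region far from the target
similar = rng.normal(0.2, 1.0, size=(200, 3))   # region close to the target

j_distinct = fisher_variance_ratio(distinct, target)
j_similar = fisher_variance_ratio(similar, target)
```

In a top-down attention map, a region whose ratio is low (features linearly inseparable from the target's) would receive high activation, since it is a plausible target location; `j_distinct` comes out much larger than `j_similar` in the demo.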

Journal

  • IEICE Transactions on Information and Systems

    IEICE Transactions on Information and Systems E98.D(12), 2308-2316, 2015

    The Institute of Electronics, Information and Communication Engineers
