Top-down Visual Saliency Guided by Captions

Abstract

Neural image/video captioning models can generate accurate descriptions, but their internal process of mapping regions to words is a black box and therefore difficult to explain. Top-down neural saliency methods can find important regions given a high-level semantic task such as object classification, but cannot use a natural language sentence as the top-down input for the task. In this paper, we propose Caption-Guided Visual Saliency to expose the region-to-word mapping in modern encoder-decoder networks and demonstrate that it is learned implicitly from caption training data, without any pixel-level annotations. Our approach can produce spatial or spatio-temporal heatmaps for both predicted captions and for arbitrary query sentences. It recovers saliency without the overhead of introducing explicit attention layers, and can be used to analyze a variety of existing model architectures and improve their design. Evaluation on large-scale video and image datasets demonstrates that our approach achieves captioning performance comparable to existing methods while providing more accurate saliency heatmaps.

Overview
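
Caption-Guided Visual Saliency takes a trained encoder-decoder captioning model and, for a given sentence (either the model's own prediction or an arbitrary query), produces a spatial or spatio-temporal heatmap for every word, without explicit attention layers or pixel-level supervision. As a rough, self-contained illustration of how such a caption-conditioned region-to-word mapping can be probed, the sketch below masks one region descriptor at a time and measures how much the decoder's confidence in each caption word drops. The toy model, the masking scheme, and every name in the sketch are our own illustrative assumptions, not code from the paper or the repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 1000   # toy vocabulary size
FEAT_DIM = 2048     # dimensionality of the region/frame descriptors
HID_DIM = 512       # decoder hidden size

class TinyCaptioner(nn.Module):
    """Attention-free toy captioner: region descriptors are mean-pooled into
    a single context vector that conditions an LSTM language model."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(FEAT_DIM, HID_DIM)
        self.emb = nn.Embedding(VOCAB_SIZE, HID_DIM)
        self.dec = nn.LSTM(HID_DIM * 2, HID_DIM, batch_first=True)
        self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

    def word_logprobs(self, feats, caption):
        """feats: (num_regions, FEAT_DIM); caption: (T,) word ids.
        Returns (T, VOCAB_SIZE) log-probabilities under teacher forcing."""
        ctx = torch.tanh(self.enc(feats)).mean(dim=0)           # pooled visual context
        inp = torch.cat([caption.new_zeros(1), caption[:-1]])   # id 0 acts as <bos>
        x = torch.cat([self.emb(inp), ctx.expand(inp.size(0), -1)], dim=-1)
        h, _ = self.dec(x.unsqueeze(0))
        return F.log_softmax(self.out(h.squeeze(0)), dim=-1)

def saliency(model, feats, caption):
    """Leave-one-out probe: for every (region, word) pair, score how much the
    word's log-probability drops when that region descriptor is masked out."""
    with torch.no_grad():
        steps = torch.arange(caption.size(0))
        base = model.word_logprobs(feats, caption)[steps, caption]
        drops = []
        for r in range(feats.size(0)):
            masked = feats.clone()
            masked[r] = 0.0                                     # remove one descriptor
            lp = model.word_logprobs(masked, caption)[steps, caption]
            drops.append(base - lp)                             # bigger drop = more salient
        # normalize across regions so each word's saliency map sums to 1
        return torch.softmax(torch.stack(drops, dim=0), dim=0)  # (num_regions, T)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyCaptioner().eval()
    feats = torch.randn(8, FEAT_DIM)               # 8 toy region descriptors
    caption = torch.randint(0, VOCAB_SIZE, (5,))   # 5 toy word ids
    print(saliency(model, feats, caption).shape)   # torch.Size([8, 5])

In this sketch, saliency(model, feats, caption) returns a num_regions x num_words map; each column can be reshaped onto the grid of descriptors and upsampled to visualize a heatmap for the corresponding word.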


Code

The code to prepare data and train the model can be found at:
https://github.com/VisionLearningGroup/caption-guided-saliency


Reference

If you find this useful in your work, please consider citing:

    @InProceedings{Ramanishka_2017_CVPR,
      author    = {Ramanishka, Vasili and Das, Abir and Zhang, Jianming and Saenko, Kate},
      title     = {Top-Down Visual Saliency Guided by Captions},
      booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {July},
      year      = {2017}
    }