
CapGaze

Human attention in image captioning

The dataset has two parts:

capgaze1: 1000 images with raw data (eye fixations and verbal descriptions) from 5 native English speakers. This part was used for the analysis. For data privacy reasons, the voice in the verbal descriptions was masked by pitch modulation; the spoken content was preserved.

capgaze2: 3000 images with processed data: all eye fixations from different viewers of each image were combined into a single fixation map. This part was used for developing a saliency prediction model for the image captioning task.
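As a rough illustration of the capgaze2 processing step, the sketch below pools fixation points from all viewers of an image into one smoothed fixation map. It is a minimal sketch under stated assumptions, not the authors' published pipeline: the function name, the Gaussian sigma, and the max-normalization are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_fixation_map(fixations, height, width, sigma=25.0):
    """Pool (x, y) fixation points from all viewers of one image into a
    single fixation map, as described for the capgaze2 split.

    fixations: iterable of (x, y) pixel coordinates, already pooled
               across viewers.
    sigma:     Gaussian blur in pixels; 25 px is an assumed value that
               roughly stands in for ~1 degree of visual angle, not a
               parameter published with the dataset.
    """
    fix_map = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        # Skip fixations recorded slightly outside the image bounds.
        if 0 <= yi < height and 0 <= xi < width:
            fix_map[yi, xi] += 1.0
    # Smooth the discrete fixation counts into a continuous density map.
    fix_map = gaussian_filter(fix_map, sigma=sigma)
    # Normalize to [0, 1] so maps are comparable across images
    # (an assumed convention, common in saliency benchmarks).
    if fix_map.max() > 0:
        fix_map /= fix_map.max()
    return fix_map

# Example: fixations from several viewers of a 480x640 image.
fmap = build_fixation_map([(120, 200), (340, 95), (338, 101)], height=480, width=640)
```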

Task: Visual Attention
Annotation Types: Bounding Boxes
Items: 4000
Labels: 4000
Last updated: January 20, 2022
License: Unknown