
TextVQA

A dataset to benchmark visual reasoning based on text in images

TextVQA requires models to read and reason about text in images to answer questions about them. Specifically, models need to incorporate a new modality of text present in the images and reason over it to answer TextVQA questions.

Statistics
28,408 images from OpenImages
45,336 questions
453,360 ground truth answers
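
Each question in the release carries ten ground truth answers (453,360 ÷ 45,336 = 10). Below is a minimal sketch of loading and inspecting the annotations, assuming the published TextVQA v0.5.1 JSON layout (a top-level "data" list whose records include "question", "image_id", and "answers"); the file name is a hypothetical local path.

    import json
    from collections import Counter

    # Hypothetical local path; the annotation file ships with the dataset download.
    ANNOTATIONS = "TextVQA_0.5.1_train.json"

    with open(ANNOTATIONS) as f:
        entries = json.load(f)["data"]  # assumed top-level key: one record per question

    sample = entries[0]
    print("questions in this split:", len(entries))
    print("image:", sample["image_id"])
    print("question:", sample["question"])

    # Each question comes with 10 human-provided answers, which is why
    # 45,336 questions produce 453,360 ground truth answers.
    print("answer counts:", Counter(sample["answers"]))

Tallying answers with Counter mirrors how TextVQA is scored: the standard VQA-style accuracy metric credits a prediction in proportion to how many of the ten annotators gave that exact answer.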

Facebook AI Research
Task
Visual Question Answering
Annotation Types
Semantic Segmentation
Items
28,408
Classes
2
Labels
28,408
Last updated on January 20, 2022
Licensed under CC-BY