
IconQA

A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning


Current visual question answering (VQA) tasks mainly consider answering human-annotated questions about natural images in everyday contexts. In this work, we propose a new and challenging benchmark, icon question answering (IconQA), which highlights the importance of abstract diagram understanding and comprehensive cognitive reasoning in real-world diagram word problems. For this benchmark, we build a large-scale IconQA dataset consisting of three sub-tasks: multi-image-choice, multi-text-choice, and filling-in-the-blank. Compared to existing VQA benchmarks, IconQA requires not only perception skills such as object recognition and text understanding, but also diverse cognitive reasoning skills, such as geometric reasoning, commonsense reasoning, and arithmetic reasoning.
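To make the three sub-task formats concrete, below is a minimal Python sketch of how individual records might be represented. The IconQAExample structure, field names, file paths, and sample questions are illustrative assumptions only; they are not the dataset's official schema, loader, or contents.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IconQAExample:
    """Hypothetical record layout for one IconQA question.

    Field names here are illustrative assumptions, not the dataset's
    official schema or loader.
    """
    question: str                              # natural-language question text
    image_path: str                            # abstract diagram the question refers to
    sub_task: str                              # "multi-image-choice", "multi-text-choice", or "fill-in-blank"
    text_choices: Optional[List[str]] = None   # candidate answer strings (multi-text-choice only)
    image_choices: Optional[List[str]] = None  # candidate answer images (multi-image-choice only)
    answer: str = ""                           # gold answer: a choice index or a free-form string

# Toy records, one per sub-task style; the content is made up for illustration.
examples = [
    IconQAExample(
        question="Which shape comes next in the pattern?",
        image_path="diagrams/pattern_001.png",
        sub_task="multi-image-choice",
        image_choices=["choices/a.png", "choices/b.png", "choices/c.png"],
        answer="1",
    ),
    IconQAExample(
        question="How many leaves are shown?",
        image_path="diagrams/leaves_042.png",
        sub_task="fill-in-blank",
        answer="7",
    ),
]

for ex in examples:
    print(f"[{ex.sub_task}] {ex.question} -> {ex.answer}")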

Center for Vision, Cognition, Learning and Autonomy, UCLA
http://vcla.stat.ucla.edu
Items: 57,672
Classes: 377
Labels: 57,672
Last updated on January 20, 2022
Licensed under CC-BY-NC-SA