ImageNet-A

Natural Adversarial Examples as ImageNet Test set

We introduce natural adversarial examples -- real-world, unmodified, naturally occurring images that cause classifier accuracy to degrade significantly. We curate 7,500 natural adversarial examples and release them as an ImageNet classifier test set that we call ImageNet-A. This dataset serves as a new way to measure classifier robustness. Like ℓ_p adversarial examples, ImageNet-A examples transfer to unseen or black-box classifiers. For example, on ImageNet-A a DenseNet-121 obtains around 2% accuracy, a drop of approximately 90 percentage points. Recovering this accuracy is not simple, because ImageNet-A examples exploit deep flaws in current classifiers, including over-reliance on color, texture, and background cues. We observe that popular training techniques for improving robustness have little effect, but we show that some architectural changes can enhance robustness to natural adversarial examples. Future research is required to enable robust generalization to this hard ImageNet test set.
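Because ImageNet-A covers only a subset of the 1,000 ImageNet-1k classes, a standard classifier is usually scored with its predictions restricted to that subset. The sketch below shows the idea in plain Python, using a toy 6-class label space and a hypothetical 3-class subset; the real 200-class index mapping ships with the ImageNet-A release.

```python
def imagenet_a_accuracy(logits, labels, class_subset):
    """Top-1 accuracy with predictions restricted to a class subset.

    logits:       list of per-example score lists over the full label space
    labels:       list of ground-truth class indices
    class_subset: class indices present in the test set (here a toy
                  stand-in for ImageNet-A's 200 ImageNet-1k indices)
    """
    correct = 0
    for scores, label in zip(logits, labels):
        # Take the argmax only over the classes the test set contains.
        pred = max(class_subset, key=lambda c: scores[c])
        correct += int(pred == label)
    return correct / len(labels)

# Toy demonstration: 3 examples, 6 classes, subset {1, 4, 5}.
logits = [
    [0.1, 0.9, 0.0, 0.0, 0.0, 0.0],  # restricted argmax -> class 1
    [0.0, 0.0, 0.8, 0.0, 0.2, 0.0],  # -> class 4 (class 2 is excluded)
    [0.0, 0.3, 0.0, 0.0, 0.0, 0.7],  # -> class 5, but label is 3
]
labels = [1, 4, 3]
print(imagenet_a_accuracy(logits, labels, [1, 4, 5]))  # 2 of 3 correct
```

Restricting the argmax this way avoids penalizing a model for predicting an ImageNet class that ImageNet-A simply does not include.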

Task
Image Classification
Annotation Types
Bounding Boxes
Items
Classes
Labels
Models using this dataset
Last updated on January 20, 2022
Licensed under MIT