A Dataset for Developing and Benchmarking Active Vision
Our dataset enables simulation of robotic motion through an environment for object detection. We collect data in many scenes; a scene may be one or more rooms in a home or office.

Collection Procedure:
For most scenes we conduct a full scan, then move object instances around the scene and collect either a full or a partial second scan. A partial scan has far fewer images and less coverage of the scene than a full scan. A number of object instances from the BigBIRD dataset are placed in every scene, and the second scan usually contains a different subset of BigBIRD instances than the first. At various locations in each scene we sample 12 images, 30 degrees apart, covering a full rotation. We recommend using our visualization code with the example scene to understand the dataset. More detailed information can be found in our paper and on the Add Data page.

Objects:
We currently label 33 unique instances in our scenes. More information about these instances can be found in the instances tab above.

Check out our GitHub for code to visualize and load our data.
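As a rough illustration of the view-sampling step, the sketch below generates 12 viewing directions spaced 30 degrees apart at a single location (12 × 30° = 360°, one full rotation). This is a minimal sketch under assumed conventions, not the dataset's actual capture code; the function name and the choice of planar unit vectors are hypothetical.

```python
import math

def sample_headings(num_views=12, step_deg=30.0):
    """Return viewing directions as unit vectors in the ground plane,
    spaced step_deg degrees apart (hypothetical helper, for illustration)."""
    headings = []
    for i in range(num_views):
        theta = math.radians(i * step_deg)
        headings.append((math.cos(theta), math.sin(theta)))
    return headings

# 12 views * 30 degrees = 360 degrees: one full turn around the location
views = sample_headings()
```

At each sampled location in a scene, one image would be captured along each of these headings.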