Synthetic and Real-world Datasets of Transparent Objects for Robotics
Transparent objects are a common part of everyday life, yet their unique visual properties make it difficult for standard 3D sensors to produce accurate depth estimates for them. They often appear as noisy or distorted approximations of the surfaces that lie behind them. To foster research in data-driven techniques that can address these issues, we present several large-scale synthetic and real-world 3D datasets with ground-truth 3D data of transparent objects.

The ClearGrasp dataset contains more than 50,000 photorealistic renders of transparent objects with corresponding surface normals, segmentation masks, edges, and depth. It also contains 286 real-world images with corresponding ground-truth depth, taken under a range of indoor lighting conditions and on various cloth and veneer backgrounds, with random opaque objects scattered around the scene.

The Real-World Transparent Objects Dataset contains 15 transparent objects in 5 classes. It comprises 48,000 real-world 720p images of single objects, captured with stereo and depth cameras against a variety of backgrounds, some quite challenging. Depth images are provided for both the original transparent object and an opaque twin placed at exactly the same pose, to facilitate algorithms that require depth input. All RGB and depth images are rectified and registered to each other, and segmentation and edge masks are provided. All objects in the images are labeled with 3D keypoints in the camera and world frames, with a 3D RMSE of 3.4 mm.
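Because the opaque-twin depth maps are registered to the transparent-object captures, a natural use of the dataset is to score a depth estimate against the twin's depth inside the object's segmentation mask. The sketch below is illustrative only: the function name and the toy arrays are assumptions, not part of the dataset's tooling, and stand in for real depth maps loaded from the dataset.

```python
import numpy as np

def masked_depth_rmse(pred_depth, gt_depth, mask):
    """RMSE (in the depth maps' units) over pixels where the segmentation
    mask is True and the ground-truth depth is valid (> 0)."""
    valid = mask & (gt_depth > 0)
    err = pred_depth[valid] - gt_depth[valid]
    return float(np.sqrt(np.mean(err ** 2)))

# Toy arrays standing in for a transparent-object depth estimate and the
# registered opaque-twin ground truth (hypothetical values, in metres).
gt = np.full((4, 4), 1.00)           # opaque-twin ground-truth depth
pred = gt + 0.05                     # prediction with a uniform 5 cm bias
mask = np.zeros((4, 4), dtype=bool)  # object segmentation mask
mask[1:3, 1:3] = True
print(masked_depth_rmse(pred, gt, mask))  # → 0.05
```

The same function applies unchanged to the 3D keypoint labels: stacking predicted and ground-truth keypoints as N×3 arrays and passing an all-True mask yields the per-object 3D RMSE in the same spirit as the 3.4 mm figure reported above.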