Semantic Segmentation of Laparoscopic Surgical Images
The dataset contains images from scenes of laparoscopic surgical procedures together with their corresponding semantic segmentation masks, stored as PNG and JPG files, respectively. We sub-sampled 307 images from Videos 1 and 2 of the training set of the M2CAI-tool dataset (A. P. Twinanda, S. Shehata, D. Mutter, J. Marescaux, M. de Mathelin, N. Padoy, "EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos," IEEE Transactions on Medical Imaging (TMI), 2017) and annotated them at the pixel level into the categories and sub-categories defined below:

- Organs: Liver, Gallbladder, Upperwall, Intestine
- Instruments: Grasper, Bipolar, Hook, Scissors, Clipper, Irrigator, Specimen Bag, Trocars (provide the openings through which surgical instruments are inserted), Clip (the clips applied by the Clipper to seal blood vessels)
- Fluids: Bile, Blood
- Miscellaneous: Unknown (a label for pixels that the annotator could not discern), Black (a label for the surrounding region of the image that is not visible because the trocar limits the camera's field of view)
- Artery

Altogether, we annotated a total of 5 different organs, 9 different instruments (of which Trocars and Clip do not appear in the tool-presence annotations of the M2CAI-tool dataset), 2 fluids, 2 miscellaneous categories, and the artery. The annotations were made with autonomous robotic surgery applications in mind. Further dataset details are described in our paper, "m2caiSeg: Semantic Segmentation of Laparoscopic Images using Convolutional Neural Networks."
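As a minimal sketch of how one might work with such pixel-level masks, the snippet below counts the pixels belonging to each category in a mask image. The colour-to-category mapping shown is purely illustrative (the actual colour encoding of the mask files is defined by the dataset itself), and the category names are a small subset chosen for the example.

```python
# Illustrative sketch: per-category pixel counts from a segmentation mask.
# The RGB colours below are hypothetical placeholders, NOT the dataset's
# actual encoding -- consult the dataset documentation for the real values.
import numpy as np

CATEGORY_COLORS = {
    "liver": (255, 0, 0),        # placeholder colour
    "gallbladder": (0, 255, 0),  # placeholder colour
    "grasper": (0, 0, 255),      # placeholder colour
    "black": (0, 0, 0),          # placeholder colour
}

def pixel_counts(mask: np.ndarray) -> dict:
    """Count pixels per category in an (H, W, 3) uint8 mask array."""
    counts = {}
    for name, color in CATEGORY_COLORS.items():
        # A pixel matches a category when all three channels agree.
        matches = np.all(mask == np.array(color, dtype=np.uint8), axis=-1)
        counts[name] = int(matches.sum())
    return counts

# Tiny synthetic 2x2 "mask": one liver pixel, three black pixels.
demo = np.array(
    [[[255, 0, 0], [0, 0, 0]],
     [[0, 0, 0], [0, 0, 0]]], dtype=np.uint8)
print(pixel_counts(demo))  # {'liver': 1, 'gallbladder': 0, 'grasper': 0, 'black': 3}
```

In practice the mask PNGs would be loaded with an image library (e.g. Pillow's `Image.open` followed by `np.asarray`) before being passed to a function like this.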