We are excited to introduce the updated V7 Auto-Annotate, now powered by the Segment Anything Model (SAM) developed by Meta. SAM is a great all-purpose tool for automatic zero-shot segmentation tasks.
Pick, combine, and merge pre-segmented sections of images
You can pre-process the whole image and create semantic masks with the SAM engine. This gives you finer control over your annotations and produces more accurate masks.
It can detect and segment objects within a specified bounding box area
The previous version of the Auto-Annotate allowed you to automatically segment objects within a specified area.
SAM is now the default option in V7, but you can switch between SAM and the traditional Auto-Annotate tool, and both can be used together for different scenarios. The two work interchangeably, as both produce polygon annotations as their final output.
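To give a rough idea of what "polygon output" means under the hood, a binary segmentation mask can be reduced to the points along its outline. The sketch below is an illustrative simplification, not V7's actual exporter: real tools use proper contour tracing (e.g. marching squares), while this version simply collects mask pixels that touch the background.

```python
# Illustrative sketch: reduce a binary mask to its boundary pixels.
# Real exporters use contour-tracing algorithms; this simplified version
# just keeps mask pixels that have a non-mask 4-neighbor or sit on the edge.

def mask_boundary(mask):
    """Return (x, y) points of mask pixels that lie on the mask's outline."""
    h, w = len(mask), len(mask[0])
    points = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            # Pixels on the image edge or next to background are boundary.
            if any(ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny][nx]
                   for ny, nx in neighbors):
                points.append((x, y))
    return points

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(mask_boundary(mask))  # all four mask pixels lie on the boundary here
```

Interior pixels fully surrounded by mask are dropped, which is why a saved SAM mask can be stored compactly as polygon vertices rather than a full pixel grid.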
The updated Auto-Annotate tool, powered by SAM, provides numerous benefits for your training data workflows and AI product development.
SAM is now the primary engine for our updated Auto-Annotate tool. Our goal was to make dataset labeling easier, and SAM delivers advanced segmentation capabilities with intuitive annotation mechanics.
To leverage all of these features in V7, follow a few simple steps.
Pick a file in your dataset and go to the annotation panel. Click the Auto-Annotate button (or press N on your keyboard) to pre-process your image with SAM. A short animation with dots should appear. Once the animation finishes, you can pick one of the highlighted objects in the image.
Once an area of the image has been highlighted with SAM, save the annotation by clicking the Save button or by pressing Enter. Be sure to select the class you want to map the SAM annotations onto.
For instance, to select apples, simply pick an apple, create the "Apple" class, press Enter, choose another apple, press Enter, and so on. Notice that you can work with only one class at a time. When selecting a banana, remember to change the annotation class to "Banana."
For complex objects, adjust the shape of your annotations by adding positive and negative points. The model will attempt to predict the correct area of the annotation based on the points you include or exclude.
Positive points are marked in blue, while negative points are red. Adding more points makes it easier for the model to predict the correct shape of the annotation, even when the target consists of multiple separate segments.
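For context, Meta's open-source segment-anything library encodes these clicks the same way: a list of (x, y) coordinates plus a parallel list of labels, where 1 marks a positive (include) point and 0 a negative (exclude) point. The sketch below builds that prompt structure in plain Python; the actual predictor call is shown only as a comment, since running it requires the model weights.

```python
# Sketch of how point prompts are typically encoded for SAM
# (following the convention of Meta's open-source segment-anything library:
# label 1 = positive/include, label 0 = negative/exclude).

def build_point_prompt(positive, negative):
    """Combine positive and negative clicks into coords + labels lists."""
    coords = list(positive) + list(negative)
    labels = [1] * len(positive) + [0] * len(negative)
    return coords, labels

# Two clicks inside the object, one on a region to exclude.
coords, labels = build_point_prompt(
    positive=[(120, 80), (140, 95)],
    negative=[(200, 60)],
)
print(coords, labels)

# With the real library (model weights required), the prompt would be
# passed along the lines of:
# masks, scores, _ = predictor.predict(
#     point_coords=np.array(coords), point_labels=np.array(labels))
```

The point coordinates here are arbitrary illustrative values; in the V7 UI these come from where you click on the image.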
Once the polygon mask is saved, you can modify it with other tools from the panel. For example, you can add or erase parts of your new annotation with the Brush tool.
SAM is a versatile model for all sorts of segmentation tasks. Here are some more examples to help you understand the practical applications of the upgraded Auto-Annotate tool.
As the name suggests, SAM can auto-segment anything (for example, a fruit stand) with a single click. After the animation is complete, you will see that the image is now pre-segmented, with the different fruits highlighted.
Now, all you have to do is select a fruit, pick the right class, and press Enter. The SAM engine will automatically create a polygon mask around the selected fruit, significantly speeding up the annotation process.
You can choose the level of detail you want in your annotations. The SAM engine is very good at predicting whether you are interested in the whole object or just a specific part of it.
For example, if you click on the pizza, the entire pizza, including the ingredients, will be selected. However, if you want to annotate specific ingredients, like basil or tomatoes, simply click on them. Only that part of the pizza will be highlighted. This gives you granular control over your annotations, allowing you to annotate complex scenes with ease.
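In the open-source SAM, this whole-versus-part ambiguity surfaces as multiple candidate masks returned for a single click, each with a predicted quality score (the `multimask_output` option). One common strategy is to keep the highest-scoring candidate. A minimal sketch with stand-in values, not real model output:

```python
# Sketch: SAM's open-source predictor can return several candidate masks
# for one click (whole object vs. sub-parts) along with quality scores.
# A simple selection strategy is to keep the highest-scoring candidate.
# The masks and scores below are stand-ins, not real model output.

def pick_best_mask(masks, scores):
    """Return the mask whose predicted quality score is highest."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return masks[best]

candidates = ["whole-pizza mask", "tomato-slice mask", "basil-leaf mask"]
scores = [0.91, 0.78, 0.65]
print(pick_best_mask(candidates, scores))  # → "whole-pizza mask"
```

In the V7 UI you don't choose by score yourself; where you click steers which granularity the engine proposes.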
Let’s consider a photo of a factory line with cans of soda being manufactured. Some cans are fully visible, while others are partially obscured by machinery or other cans. With the traditional Auto-Annotate tool, labeling such images could be challenging. However, with the SAM integration, this task becomes much easier.
The updated auto-annotation engine will pre-segment the image, highlighting each can individually (even those that are partially obscured). When you select a can and save the segmentation mask, the engine will create a multi-polygon annotation that accurately represents the visible parts of the can.
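One way to see why occlusion produces a multi-polygon annotation: the visible parts of the can form disconnected regions in the mask, and each region becomes its own polygon. The sketch below counts such regions with a plain flood fill; it is an illustrative helper, not V7's actual exporter.

```python
# Sketch: a partially occluded object leaves several disconnected regions
# in its mask; each region would be exported as a separate polygon.
# Simple 4-connected flood fill to count those regions (illustrative only).

def count_regions(mask):
    """Count 4-connected regions of truthy cells in a 2D binary mask."""
    h, w = len(mask), len(mask[0])
    seen = set()
    regions = 0
    for sy in range(h):
        for sx in range(w):
            if not mask[sy][sx] or (sy, sx) in seen:
                continue
            regions += 1
            stack = [(sy, sx)]  # flood-fill one connected region
            while stack:
                y, x = stack.pop()
                if not (0 <= y < h and 0 <= x < w) or (y, x) in seen:
                    continue
                if not mask[y][x]:
                    continue
                seen.add((y, x))
                stack.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return regions

# A can split into two visible parts by a machine arm in front of it:
occluded_can = [
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
]
print(count_regions(occluded_can))  # → 2
```

Each of the two regions would then be traced into its own polygon, together forming one multi-polygon annotation for the single can.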
While the integration of the Segment Anything Model (SAM) represents a significant upgrade for the V7 Auto-Annotate tool, it's important to acknowledge some limitations.
Despite these considerations, the integration of SAM into the V7 Auto-Annotate tool brings several key improvements and will play a significant role in our future updates.
As we continue to enhance and refine our tools, we hope to address current limitations while introducing new features that will further improve your ML workflows.
Read more: Segment Anything Model (SAM): Documentation