Pixel-Perfect Automated Image Labeling

Create ground truth 10x faster with V7 Darwin

Auto-Annotate

The most accurate pixel-perfect image segmentation labeling tool: it works on any object, needs no prior training, and is entirely neural-network based, with no statistical hackery involved. If that sounds like overselling:

See it working live ➜

Annotator Metrics

Track annotators by time spent, annotations per minute, total images, accuracy, and more. Whether you have 10 or 1,000 annotators, each worker's output is reported in graphs and exportable as CSV.

Annotations per minute (APM)
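
As a concrete example, once the per-annotator CSV is exported, APM can be recomputed in a few lines of Python. This is a minimal sketch: the file name and column names (annotator, annotations, minutes_active) are assumptions, so check your export's header row before adapting it.

    import csv
    from collections import defaultdict

    # Tally annotations and active minutes per annotator.
    totals = defaultdict(lambda: {"annotations": 0, "minutes": 0.0})

    with open("annotator_metrics.csv", newline="") as f:
        for row in csv.DictReader(f):
            worker = totals[row["annotator"]]
            worker["annotations"] += int(row["annotations"])
            worker["minutes"] += float(row["minutes_active"])

    # Annotations per minute (APM) for each worker.
    for name, t in sorted(totals.items()):
        apm = t["annotations"] / t["minutes"] if t["minutes"] else 0.0
        print(f"{name}: {apm:.2f} APM")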

Review and QA

Approve or reject images and provide feedback to improve the quality of your ground truth. Communicate with your annotators via image comments and real-time notifications.

Pixel-Perfect Output

{
  "directional_vector": {
    "angle": 1.57,
    "length": 42.65
  },
  "name": "Aircraft",
  "polygon": {
    "path": [
      {
        "x": 474,
        "y": 343
      }
    ]
  }
}

Sub-pixel perfect, in fact. V7 Darwin's JSON output is a lightweight format that is easy to read and parse, expandable to multiple annotation types, and built for convertibility.

Download a sample output ➜
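
Reading an export with Python's standard library is correspondingly simple. A minimal sketch, assuming the snippet above is saved as sample.json:

    import json

    # Load one annotation from the sample export above.
    with open("sample.json") as f:
        annotation = json.load(f)

    print(annotation["name"])                         # Aircraft
    print(annotation["directional_vector"]["angle"])  # 1.57

    # Polygon vertices are a list of {"x": ..., "y": ...} points.
    for point in annotation["polygon"]["path"]:
        print(point["x"], point["y"])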

Sub Annotations

You can add attributes, text, directional vectors, grouping, and more within each region of interest. Sub annotations act as an extra layer of detail on your labels.

Easy In, Easy Out

Seamlessly integrate V7 Darwin's labeling tools into your workflow. Import raw data, version it, and export it to your favourite framework with one line of code.

Learn about Dataset Management ➜
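
As a sketch of what that one line looks like in practice, here is V7's open-source darwin-py client (pip install darwin-py). The team/dataset slug is a placeholder and method names can shift between darwin-py versions, so treat this as illustrative rather than definitive:

    from darwin.client import Client

    # Authenticate, point at a remote dataset, and pull images plus
    # annotations locally. "your-team/your-dataset" is a placeholder slug.
    client = Client.from_api_key("YOUR_API_KEY")
    dataset = client.get_remote_dataset("your-team/your-dataset")
    dataset.pull()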

Secure (really)

Annotators have limited viewing rights and cannot download images. All images and video are encrypted in transit on a GDPR-compliant platform. V7 is serious about security.

Download a data security statement ➜

Medically Compliant

Develop FDA- and EMA-compliant medical applications in a HIPAA-compliant environment. Include clinicians and experts as part of your team.

Plug in an expert workforce ➜

Find Your Ideal Ground Truth

Half of your AI's success stems from correctly annotated training data. Use one of the tools below to delineate what your AI needs to learn in your images.

Auto-Annotate

Pixel-perfect polygon masks generated via V7's any-object neural network. Click on object parts to include or exclude them, and fine-tune the model once you have enough training data.

Good for:

- Any visible object, known or unknown.

Bad for:

- Non-visual objects, such as a midpoint or curvature.

Allows you to train the following model types:

- Object Detection
- Semantic Segmentation
- Instance Segmentation
- Volumetric Segmentation
- Composite objects

Polygon

Used when you need to trace the contour of an object, such as a silhouette, or to define an amorphous entity like “sky” or “road”.

Good for:

- Any visible object.
- Semantic objects, like floor surfaces

Bad for:

- Non-visual objects, such as a midpoint or curvature.

Allows you to train the following model types:

- Object Detection
- Semantic Segmentation
- Instance Segmentation
- Volumetric Segmentation
- Composite objects

Brush and Eraser Tool

Still a lightweight vector, but drawn or erased like a painting tool.

Brush & Eraser are a handy way of creating complex polygons and holes.

Good for:

- Composite objects, thin objects, or objects with holes.
- Semantic objects, like floor surfaces

Allows you to train the following model types:

- Object Detection
- Semantic Segmentation
- Instance Segmentation

Bounding Box

Used to train detectors. Bounding boxes are not as precise as polygon masks, but in some cases cheaper to compute. In Darwin, masks can be converted to bounding boxes but not the other way around.
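
The mask-to-box direction is simple geometry, which is why Darwin can derive boxes automatically; the reverse would require re-annotating, because a box discards the contour. A minimal sketch of the conversion, assuming polygon points in the x/y format shown in the sample output above:

    # Minimal sketch: derive an axis-aligned bounding box from polygon
    # vertices. The reverse direction is lossy, since a box has no contour.
    def polygon_to_bbox(path):
        xs = [p["x"] for p in path]
        ys = [p["y"] for p in path]
        return {"x": min(xs), "y": min(ys),
                "w": max(xs) - min(xs), "h": max(ys) - min(ys)}

    print(polygon_to_bbox([{"x": 474, "y": 343}, {"x": 520, "y": 401}]))
    # {'x': 474, 'y': 343, 'w': 46, 'h': 58}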

Good for:

- Uniformly shaped objects
- Objects that don’t overlap
- Low-compute projects

Bad for:

- Elongated objects
- Textures and large background objects (“stuff”)
- Composite objects, or frequently occluded objects.

Allows you to train the following model types:

- Object Detection

Keypoint

Defines a point that may represent an object or marker. Keypoints can be used alone, or in combination to form a point map that defines the pose of an object.

Good for:

- Object markers, such as face points or corners
- Non-visual objects, such as the inner midpoint of an object

Bad for:

- Markers that vary greatly in scale

Allows you to train the following model types:

- Keypoint Detection
- Keypoint Mapping
- 2D and 6DoF Object Pose

Keypoint Skeleton & Custom Polygons

A network of keypoints connected by vectors. Used to define the 2D or 3D pose of a multi-limbed object. Keypoint skeletons have a defined set of points that can be moved to adapt to an object’s appearance.
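
One way to picture a skeleton template is as named points plus the edges connecting them. The sketch below is a hypothetical illustration of that structure, not Darwin's actual schema:

    # Hypothetical skeleton template: named keypoints plus connecting
    # edges. Illustrative only; not Darwin's actual schema.
    skeleton = {
        "points": ["head", "neck", "left_hand", "right_hand"],
        "edges": [("head", "neck"), ("neck", "left_hand"),
                  ("neck", "right_hand")],
    }

    # Each annotated instance assigns coordinates to the named points.
    instance = {"head": (120, 40), "neck": (120, 80),
                "left_hand": (80, 150), "right_hand": (160, 150)}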

Good for:

- Human, animal, or object pose
- Objects with a defined number of marker points

Bad for:

- Objects with an undefined number of joints, such as a tree

Allows you to train the following model types:

- Pose estimation

Line

A series of points forming a line. Line (or polyline) annotations may be used to define a slope, direction, or edge. They are often used for lane markings or trajectories.

Good for:

- Linearly defined objects with no volume, such as edges
- Non-visual objects such as mid-points or trajectories

Bad for:

- Defining objects with volume

Allows you to train the following model types:

- Keypoint Detector
- Regressor networks

Cuboid

Used for 3D and 6-degrees-of-freedom (6DoF) object detectors. Cuboids are 3-dimensional bounding boxes that can enclose an item, defining its dimensions and pose in a captured scene.

Good for:

- Objects on flat planes that need to be navigated, such as cars
- Objects that require robotic grasping

Bad for:

- Elongated or wound objects
- Textures and large background objects
- Composite and occluded objects
- Amorphous, non-rigid objects such as clothing

Allows you to train the following model types:

- Object Detection
- 3D Cuboid Estimation
- 6DoF Pose Estimation

Classification Tags

Tags describe a whole image for classification purposes. Unlike other annotations, they don’t apply to an area in particular, and should be used to describe features visible throughout an image, or define an image category.

Good for:

- Objects taking up 50% or more of an image
- Image categories, such as “indoor”
- Visual features that are present, but not positional, such as “over-exposed”

Bad for:

- Identifying small objects, or objects in scenes.
- Identifying multiple objects, counting, or any positional logic.

Allows you to train the following model types:

- Image or Video Classifier

Attributes

A tag on an annotation. Attributes describe objects in greater detail than the class name itself. They can define discrete features like color, or continuous ones like age. Attributes help AI classify objects after detecting them.

Good for:

- Any object with relevant variable traits

Allows you to train the following model types:

- Any model that can support an attribute-defining head.

Instance Tracking IDs

Instance IDs, also known as object or tracking IDs, allow you to re-identify a specific object throughout a dataset. Each annotation is given a numeric ID so that the object can be tracked, or simply distinguished from other objects of the same class in an image. Darwin isolates all annotations into instances by default; adding an Instance ID, however, lets you preserve and re-use that ID between images.
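
For example, once annotations carry a shared ID, re-identifying an object across frames reduces to a group-by. A minimal sketch; the instance_id field name is illustrative, so check your export format for the exact key:

    from collections import defaultdict

    # Toy annotations: the same car (ID 7) appears in frames 0 and 1,
    # and a second car (ID 9) appears in frame 1.
    annotations = [
        {"frame": 0, "name": "Car", "instance_id": 7},
        {"frame": 1, "name": "Car", "instance_id": 7},
        {"frame": 1, "name": "Car", "instance_id": 9},
    ]

    # Group frames by instance ID to recover each object's track.
    tracks = defaultdict(list)
    for ann in annotations:
        tracks[ann["instance_id"]].append(ann["frame"])

    print(dict(tracks))  # {7: [0, 1], 9: [1]}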

Good for:

- Datasets where tracking is important
- Volumetric data

Bad for:

- Semantic segmentation

Allows you to train the following model types:

- Instance Segmentation
- Video Object Tracking
- 3D Volumetric Segmentation

Text

Apply free text to an annotation. Text can be used to train OCR, or to include unique textual information about an annotation. Unlike attributes, text isn’t saved to be re-applied to other annotations.

Good for:

- Entering freeform text, such as in clinical notes or transcribed text.

Allows you to train the following model types:

- Optical Character Recognition
- Text detection

Directional Vector

Defines an angle between 0 and 360 degrees to indicate the 2D pose or direction of an object. These vectors can be used to predict movement or to define an object’s tilt.
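
Concretely, an angle-and-length pair like the one in the sample output decomposes into x/y components with basic trigonometry. A minimal sketch; note the sample's angle of 1.57 (roughly pi/2) suggests radians, so convert from degrees first if needed:

    import math

    # Decompose a directional vector (angle + length) into x/y components.
    # Use math.radians() first if your angles are expressed in degrees.
    def to_components(angle_rad, length):
        return length * math.cos(angle_rad), length * math.sin(angle_rad)

    dx, dy = to_components(1.57, 42.65)
    print(round(dx, 2), round(dy, 2))  # 0.03 42.65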

Good for:

- Solid objects with a relevant direction or pose

Bad for:

- Amorphous objects such as clothing
- Objects with more than 4 degrees of freedom of movement.

Allows you to train the following model types:

- Object Direction or 2D object pose

Plugins

When your team's needs go beyond the standard tools, you can write plugins that add new functionality and annotation types, giving you the freedom to expand the platform.

Ready to get started?

Schedule a demo with our team or discuss your project.