Pixel Perfect Automated Image Labeling

Create ground truth 10x faster with V7 Darwin

Auto-Annotate

The most accurate pixel-perfect image segmentation labeling tool. It works on any object, even in ultra-high-res images, and needs no prior training: it is entirely neural-network based, with no statistical hackery involved. If this sounds like overselling to you:

Try it out ➜

Annotator Metrics

Measure performance by time, annotations per minute, total images, accuracy, and more. Whether you have 10 or 1,000 annotators, each worker's output is reported in graphs and an exportable .CSV.

Annotations per minute (APM)
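
For illustration, APM reduces to a simple ratio. A minimal sketch, assuming the generic definition of annotations completed per minute of active labeling time (V7's exact formula is not specified here):

# A sketch of the APM figure reported above, assuming the generic
# definition: annotations completed per minute of labeling time.
def annotations_per_minute(total_annotations: int, active_seconds: float) -> float:
    return total_annotations / (active_seconds / 60)

print(annotations_per_minute(180, 3600.0))  # 180 annotations in one hour -> 3.0 APM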

Review

Approve or reject images and provide feedback to improve the quality of your ground truth. Communicate with your annotators via image comments and real-time notifications.

Pixel Perfect Output

{
  "directional_vector": {
    "angle": 1.57,
    "length": 42.65
  },
  "name": "Aircraft",
  "polygon": {
    "path": [
      {
        "x": 474,
        "y": 343
      }
    ]
  }
}

Sub-pixel perfect, in fact. V7 Darwin's .JSON output is a lightweight, easy-to-read, easy-to-parse format that extends to multiple annotation types and is built for convertibility.

Download a sample output ➜
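
Because the output is plain JSON, reading it needs nothing beyond a standard library. A minimal Python sketch, assuming only the fields shown in the sample above (the file name is hypothetical):

import json

# Parse one annotation, assuming the structure shown above:
# a name, a polygon path, and a directional vector.
with open("aircraft.json") as f:  # hypothetical file name
    annotation = json.load(f)

points = [(p["x"], p["y"]) for p in annotation["polygon"]["path"]]
angle = annotation["directional_vector"]["angle"]
print(annotation["name"], len(points), "points, heading", angle, "rad")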

Sub Annotations

You can add attributes, text, directional vectors, grouping, and more within each region of interest. Sub-annotations add an extra layer of detail to your labels.

Easy In, Easy Out

Seamlessly integrate V7 Darwin's labeling tools into your workflow. Import raw data, version it, and export it to your favourite framework with one line of code.

Learn about Dataset Management ➜
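
As one example of that convertibility, the sketch below translates a V7 polygon annotation into a COCO-style record. The input follows the JSON sample above; the category mapping is a hypothetical stand-in for your own label set:

# Convert a V7 Darwin polygon annotation into a COCO-style record.
def to_coco(annotation, category_ids):
    path = annotation["polygon"]["path"]
    xs = [p["x"] for p in path]
    ys = [p["y"] for p in path]
    flat = [coord for p in path for coord in (p["x"], p["y"])]
    return {
        "category_id": category_ids[annotation["name"]],  # e.g. {"Aircraft": 1}
        "segmentation": [flat],  # COCO polygons are flat [x1, y1, x2, y2, ...] lists
        "bbox": [min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)],  # [x, y, w, h]
    }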

Secure (really)

Annotators have limited viewing rights and cannot download images. All image data and annotations remain the property of the uploading team. V7 is serious about security.

Download a data security statement ➜

Medically Compliant

Develop FDA- and EMA-compliant medical applications thanks to a detailed record of every annotation's date and authorship. Include clinicians and experts as part of your team.

Find Your Ideal Ground Truth

Half of your AI's success stems from the correct annotation of training data. Use one of the tools below to delineate what your AI needs to learn in your images.

Auto-Annotate

Pixel-perfect polygon masks generated via V7's any-object neural network. Click on object parts to include or exclude them, and fine-tune the model once you have enough training data.

Polygon

Used when you need to find the contour of an object like a silhouette, or define an amorphous entity like “sky” or “road”.

Allows you to train the following model types:

- Object Detection
- Semantic Segmentation
- Instance Segmentation
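
To make the segmentation connection concrete, a polygon label rasterizes into the per-pixel mask those models train on. A minimal sketch using Pillow, assuming points are (x, y) pixel tuples (the example points are illustrative):

from PIL import Image, ImageDraw

def polygon_to_mask(points, width, height):
    mask = Image.new("L", (width, height), 0)       # blank single-channel image
    ImageDraw.Draw(mask).polygon(points, fill=255)  # fill the labeled region
    return mask

mask = polygon_to_mask([(474, 343), (520, 360), (490, 400)], 640, 480)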

Bounding Box

Used for detecting and localizing objects. Bounding boxes are faster to annotate, and detectors trained on them are typically faster to run than segmentation-based networks.

Allows you to train the following model types:

- Object Detection

Keypoint

Defines a point, visible or abstract, on an object. Often used to detect parts of an item, defects, or corners. Keypoints are lightweight and quick to annotate.

Allows you to train the following model types:

- Keypoint Detector

Cuboid

A 3D rectangular prism with 6 degrees of freedom. Often used for object pose estimation and robotic grasping. Sometimes known as a 3D bounding box.

Allows you to train the following model types:

- 6DoF Pose Estimator
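
For reference, the six degrees of freedom are three of translation and three of rotation. A minimal sketch of such a pose record, with Euler angles chosen here for brevity:

from dataclasses import dataclass

@dataclass
class Pose6DoF:
    # translation, in the scene's units
    x: float
    y: float
    z: float
    # rotation, as Euler angles in radians
    roll: float
    pitch: float
    yaw: float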

Tags

Used for training classifiers, the quickest and often most reliable form of image recognition.

Allows you to train the following model types:

- Classifier

Attributes

Like tags, but for individual objects. Mark items as “occluded” or “blurry”, or refine an object’s description, such as a person being “standing” or a cloud being “nimbus”.

Allows you to train the following model types:

- Classifier

Text

Add text of arbitrary length to an object. Used for detecting characters, words, or paragraphs and storing the contained text as ground truth.

Allows you to train the following model types:

- Text Detector
- Character Recognizer

Directional Vector

Used to define the direction an object is moving in, or to learn the orientation of an object, such as a highway seen from a satellite or an eye’s gaze.

Allows you to train the following model types:

- Detector/Segmenter + Direction
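
The sample JSON above stores a directional vector as an angle and a length; recovering its x/y components is one line of trigonometry. A minimal sketch, assuming the angle is in radians from the positive x-axis (in image coordinates, y typically points down):

import math

def vector_components(angle, length):
    return (length * math.cos(angle), length * math.sin(angle))

dx, dy = vector_components(1.57, 42.65)  # the sample's values: a near-vertical vector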

Plugins

When your team's needs go beyond the standard tools, you can write plugins to add new functionalities and annotation types, giving you the freedom to expand the platform.

Ready to get started?

Schedule a demo with our team or discuss your project.