Generate Ground Truth 10x faster by creating pixel-perfect annotations. Use V7’s intuitive tools to label data and automate your ML pipelines.
The success of your AI projects relies on quality data. Explore the tools at your disposal and learn how they can help you.
Pixel-perfect polygon masks generated via V7's any-object neural network. Click on object parts to include or exclude them, and fine-tune the model once you have enough training data.
Used for finding the contour of an object like a silhouette or defining an amorphous entity like “sky” or “road”.
A lightweight vector, but drawn or erased like a painting tool. Handy for creating complex polygons and holes.
Used for 3D and six-degrees-of-freedom (6DoF) object detection. Cuboids are three-dimensional bounding boxes that enclose an item, defining its dimensions and pose in a captured scene.
An oval that appears as a two-diameter circle or multi-point polygon on export. Can be used as a round polygon or as an ellipse to mark circular objects that may distort with perspective.
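As an illustration of how an ellipse can be exported as a multi-point polygon, the curve can be sampled at regular angular intervals. A minimal sketch, assuming an ellipse given by a center, two radii, and a rotation (parameter names are illustrative, not V7's export schema):

```python
import math

def ellipse_to_polygon(cx, cy, rx, ry, angle_deg=0.0, n=32):
    """Sample n points along an ellipse into a polygon approximation."""
    theta = math.radians(angle_deg)
    points = []
    for i in range(n):
        t = 2 * math.pi * i / n
        # point on the axis-aligned ellipse
        x = rx * math.cos(t)
        y = ry * math.sin(t)
        # rotate by the ellipse's orientation, then translate to the center
        px = cx + x * math.cos(theta) - y * math.sin(theta)
        py = cy + x * math.sin(theta) + y * math.cos(theta)
        points.append((px, py))
    return points
```

Increasing `n` trades file size for fidelity; 32 points is usually enough for roughly circular objects.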
A series of points forming a line that can be used for defining a slope, direction, or edge. This tool comes in handy for lane markings or trajectories.
Bounding boxes are used for training detectors. They’re less precise than polygon masks but may prove cheaper to compute.
Defines a point that may represent an object or marker. Keypoints can be used individually or in groups to form a point map to define the pose of an object.
A network of keypoints connected by vectors. Used for defining the 2D or 3D pose of a multi-limbed object. Keypoint skeletons use a defined, moveable set of points that adapt to the object’s appearance.
Tags describe the whole image for classification purposes. Unlike other annotations, they don’t apply to an image area. They describe features of an image as a whole, or define an image category.
Attributes refer to tags on an annotation. They describe objects in greater detail than the class name itself. They can define discrete or continuous features, such as color or age. Attributes help AI classify objects after detecting them.
Instance IDs (also referred to as “object IDs” or “tracking IDs”) let you re-identify a specific object throughout a dataset. Each annotation is given a numeric ID so the object may be tracked or distinguished from other objects of the same class in an image.
Text can be used to train OCR or to include unique textual information about an annotation. Unlike attributes, the text isn’t saved to be re-applied to other annotations.
Defines a value between 0 and 360 degrees to indicate the 2D pose or direction of an object. These vectors can be used to predict movement or define an object's tilt.
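As an illustration of how such an angle relates to a 2D displacement (not V7's internal representation), the direction of a movement vector can be normalized into the 0–360 range:

```python
import math

def direction_deg(dx, dy):
    """Angle of a 2D displacement in [0, 360), counter-clockwise from the +x axis."""
    return math.degrees(math.atan2(dy, dx)) % 360
```

For example, a displacement of (0, 1) maps to 90 degrees, and (-1, 0) to 180 degrees.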
When your team's needs go beyond the standard tools, you can write plugins to add new functionalities and annotation types, giving you the freedom to expand the platform.
Create and automate data workflows, collaborate in real-time, QA review, version control your datasets, and get full visibility into every aspect of your ML pipeline.
Leverage model-in-the-loop to label your data faster. Use V7’s model tool to invoke your own model or pick one from V7’s public models library.
Build and automate data workflows to streamline your annotation projects. Add multiple review, model, consensus, logic, webhooks, and annotation stages, and assign user roles to efficiently manage your resources.
Streamline the labeling process with orientation markers, reference lines, histograms, color maps, or contrast control.
Use V7’s class-agnostic neural network to label your image datasets in a snap.
Enrich your labeled data with sub-annotations and attributes. Add text, instance ID, directional vectors, and more.
Hire professional image labelers who care about your data. We'll manage the entire labeling project for you.
From JPG to PNG to DICOM to NIfTI—V7 supports the entire gamut of image file formats.
Add consensus stages into your data training workflow and automate the QA process. Compare models or labelers with AI consensus.
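Agreement between two labelers (or a model and a labeler) on a bounding box is commonly scored by intersection-over-union. A minimal sketch, assuming boxes in (x1, y1, x2, y2) form; the box format and any threshold are illustrative, not V7's consensus implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A consensus stage can then route annotations whose pairwise IoU falls below a chosen threshold to a human review stage.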
Use V7’s powerful analytics and customization to track progress and adapt the platform to your specific labeling needs.
Track time spent, annotations per minute, and accuracy
Manage your labeling project with accurate real-time data
Bundle files into one task, or import DICOM files as hanging protocols
Combine files of different types into one UI to enable multi-modal data labeling
Inspect annotations created manually for training data, or by models in inference
Smoothly navigate files with thousands of labels from your browser
Learn how world-class ML teams build AI products with V7
V7 is super sleek, intuitive, and easy to use. Within a couple of minutes, you're off to the races and can annotate quickly. The team is highly responsive and helpful.
"We use V7 to make our workflow for deep learning training and annotation streamlined and efficient. From the pathologist’s point of view, V7 turned out to be much easier to learn and use than other software - I can easily understand what I’m doing."
We were looking for an annotation tool that would be much faster, and V7 sped up our labeling 9–10x compared to VGG. The appeal of using V7 is that it’s commercial off-the-shelf, very intuitive, and easy to use for non-technical people involved in our project.
"I like the auto-segmentation feature. To me, that’s a nice AI feature that V7 took beyond the gimmick feature - it’s mature enough to be useful."
"We needed a tool that lets us keep the data in one place, annotate, and version it. Having found V7, we decided not to build the internal solution. "
Having accurately annotated datasets was crucial to catch the typical features of malignant melanoma. V7 lets us visualize the balance of this data across populations.
"We needed a tool that could do annotating and data versioning because we distribute our tools to farms, and we need to make sure that they have the same version of data for the same models. V7 met our needs."
"V7 did everything that we wanted—it enabled us to label videos in the way we needed, the turnaround time for new features was really fast, and the reviewing and sorting process was much better."
V7 is helping us manage a complicated, intricate, pixel-perfect labeling exercise. Their model-assisted labeling is the best around.
“Thanks to V7, the image annotation is 30% faster, but realistically, considering the whole process - transferring files and QA - we more than doubled the number of images we can do in the same span of time.”
"Visibility on metrics and annotators' work in V7 is very helpful to us, and it's something we didn’t have in our internal solution. The option to check past annotations and review the work is also valuable, along with V7’s ability to interactively define the workflows and the flexibility in task assignment."
What I appreciate the most about V7 is flexible UI and API, responsive support, and active development of new features and bug fixes.
Managing our data from one place is particularly important for us. Previously, our data was stored in many different formats and in different places. Having a single source makes our data more robust and also greatly reduces the development time for new algorithms, as the learning curve for developers is small.
Discover how other AI-first companies solved knowledge tasks at scale with V7
V7 supports image, video, and text data. The file formats you can use with V7 include JPG, PNG, BMP, TIFF, SVS, MP4, MOV, AVI, DICOM (.dcm), NIfTI, and ZIP.
V7 offers three pricing plans: Team, Business, and Pro. The Team plan starts at $5,000/year. For detailed pricing and a feature overview, see the V7 pricing page.
Medical image annotation involves labeling medical imaging data, such as X-ray, ultrasound, MRI, or CT scans, to train machine learning models.
Yes. V7 works with a trusted network of partners and professional annotators who will help you turn your data into ground truth. Go to V7 Labeling Services, submit the form, and we will send you a proposal within hours.
We offer in-app chat support and email technical support to all V7 users. We will make sure to take good care of you and your team. You can get in touch with us at: firstname.lastname@example.org.