Data labeling
What is Data Labeling and How to Do It Efficiently [Tutorial]
15 min read
—
Jun 18, 2021
The accuracy of your AI model is directly correlated with the quality of the data used to train it. Learn why data labeling is an integral part of the data preparation workflow and start building reliable AI models.
Hmrishav Bandyopadhyay
Data is the currency of the future.
With technology and AI slowly seeping into our everyday lives, data and its proper use can have a significant impact on modern society.
Accurately annotated data can be used effectively by ML algorithms to detect problems and propose workable solutions, thus making data annotation an integral part of this change.
Here’s what we’ll cover:
What is data labeling?
Data labeling approaches
Common types of data labeling
How does data labeling work—V7 tutorial
Best practices for data labeling
What is data labeling?
Data labeling refers to the process of adding tags or labels to raw data such as images, videos, text, and audio.
These tags represent the class of objects the data belongs to and help a machine learning model learn to identify that particular class of objects when it encounters untagged data.
What is “training data” in machine learning?
Training data refers to the data fed to a machine learning model so that the model can learn patterns from it.
Training data can take various forms, including images, voice, text, or tabular features, depending on the machine learning model being used and the task at hand.
It can be annotated or unannotated. When training data is annotated, the corresponding label is referred to as ground truth.
Are you looking for quality datasets to label and train your models? Check out the list of 65+ datasets for machine learning.
“Ground truth” as a term is used for information that is known beforehand to be true.
Unlabeled data vs labeled data
The training dataset depends entirely on the type of machine learning task we want to focus on. Machine/deep learning algorithms can be broadly divided into three classes based on the type of data they require.
Supervised learning
Supervised learning, the most common paradigm, requires data and corresponding annotated labels for training. Popular tasks like image classification and image segmentation fall under this paradigm.
The typical training procedure consists of feeding annotated data to the machine to help the model learn, and testing the learned model on unannotated data.
To measure the accuracy of such a method, annotated data whose labels are hidden from the model is typically used in the testing stage of the algorithm. Thus, annotated data is an absolute necessity for training machine learning models in a supervised manner.
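Here is a minimal sketch of this procedure using scikit-learn and its bundled Iris dataset (the dataset and model choice are illustrative, not specific to any labeling workflow): the model learns from annotated data, and the held-out test labels stay hidden until evaluation.

```python
# A minimal supervised-learning sketch with scikit-learn.
# The test labels are held out ("hidden") during training and used only
# afterwards to measure accuracy.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # features and ground-truth labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # learn from annotated data
predictions = model.predict(X_test)      # predict without seeing test labels
print(f"Test accuracy: {accuracy_score(y_test, predictions):.2f}")
```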
Unsupervised learning
In unsupervised learning, unannotated input data is provided and the model trains without any knowledge of the labels that the input data might have.
Common unsupervised approaches include autoencoders, whose target outputs are the same as their inputs. Unsupervised learning methods also include clustering algorithms that group the data into ‘n’ clusters, where ‘n’ is a hyperparameter.
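As an illustration, here is a minimal k-means clustering sketch with scikit-learn, run on randomly generated points; no labels are involved, and the number of clusters ‘n’ is chosen by hand.

```python
# A minimal unsupervised-learning sketch: k-means clustering.
# No labels are provided; the algorithm groups the points into n clusters,
# where n is a hyperparameter chosen by the practitioner.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100, 2)                 # 100 unannotated 2-D points
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(X)        # cluster indices, not class labels
print(cluster_ids[:10])
```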
Semi-supervised learning
In semi-supervised learning, a combination of both annotated and unannotated data is used for training the model.
While this reduces the cost of data annotation by combining both kinds of data, semi-supervised training generally makes strong assumptions about the training data. Use cases of semi-supervised learning include protein sequence classification and internet content analysis.
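A minimal sketch of this idea, using scikit-learn's SelfTrainingClassifier on the Iris dataset with most labels artificially hidden (the 70 percent figure is arbitrary):

```python
# A minimal semi-supervised sketch: scikit-learn's SelfTrainingClassifier.
# Unlabeled samples are marked with -1; the model first trains on the labeled
# subset, then pseudo-labels confident unlabeled samples and retrains.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.7] = -1     # hide roughly 70% of the labels

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)                    # uses labeled and unlabeled data together
```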
Dive deeper and check out Supervised vs. Unsupervised Learning: What’s the Difference?
What is ‘Human-in-the-Loop’ (HITL)?
The term Human-In-The-Loop most commonly refers to constant supervision and validation of the AI model's results by a human.
There are two main ways in which humans become part of the machine learning loop:
Labeling training data: Human annotators are required to label the training data that is being fed to (supervised/semi-supervised) machine learning models.
Training the model: Data scientists train the model while constantly monitoring details like the loss function and predictions. At times, model performance and predictions are validated by a human, and the results of the validation are fed back to the model.
Data labeling approaches
There are various labeling approaches, depending on the problem statement, the time frame of the project, and the number of people associated with the work.
While internal labeling and crowdsourcing are very common, the terminology can also extend to include novel forms of labeling and annotation that make use of AI and active learning for the task.
The most common approaches for annotation of data are listed below.
In-house data labeling
In-house data labeling delivers the highest-quality labels and is generally done by data scientists and data engineers employed by the organization.
High-quality labeling is crucial for industries like insurance or healthcare, and it often requires consultations with experts in corresponding fields for proper labeling of data.
Check out 21+ Best Healthcare Datasets for Computer Vision if you are looking for medical data.
As is expected with in-house labeling, higher annotation quality comes at the cost of drastically longer annotation times, making the entire data labeling and cleaning process very slow.
Crowdsourcing
Crowdsourcing refers to the process of obtaining annotated data with the help of a large number of freelancers registered at a crowdsourcing platform.
The annotated datasets mostly consist of trivial data, like images of animals, plants, and the natural environment, which does not require additional expertise. Therefore, the task of annotating a simple dataset is often crowdsourced to platforms that have tens of thousands of registered data annotators.
Outsourcing
Outsourcing is a middle ground between crowdsourcing and in-house data labeling where the task of data annotation is outsourced to an organization or an individual.
One of the advantages of outsourcing to individuals is that their knowledge of the particular topic can be assessed before the work is handed over.
This approach to building annotated datasets suits projects that have limited funding yet require high-quality data annotation.
V7 works with a highly trained network of professional annotators to help you label your data faster. Learn more about V7 Labeling Services.
Machine-based annotation
One of the most novel forms of annotation is machine-based annotation. Machine-based annotation refers to the use of annotation tools and automation that can drastically increase the speed of data annotation without sacrificing quality.
The good news is that recent automation developments in traditional machine annotation tools, using unsupervised and semi-supervised machine learning algorithms, have significantly reduced the workload on human labelers.
Unsupervised algorithms like clustering, together with recently developed semi-supervised techniques for AI data labeling like active learning, can reduce annotation times dramatically.
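Here is a minimal sketch of one such technique, uncertainty sampling for active learning. The function names are illustrative rather than taken from a specific library: the model ranks unlabeled samples by confidence, and only the least confident ones go to human annotators.

```python
# A minimal active-learning sketch (uncertainty sampling). Names are
# illustrative, not a specific library's API.
import numpy as np

def select_for_labeling(model, X_unlabeled, budget=10):
    """Return indices of the `budget` least confident unlabeled samples."""
    probs = model.predict_proba(X_unlabeled)   # per-class probabilities
    confidence = probs.max(axis=1)             # confidence = top class probability
    return np.argsort(confidence)[:budget]     # lowest confidence first

# Typical loop: fit on the labeled pool, select samples, have humans label
# them, add them to the labeled pool, and repeat.
# to_annotate = select_for_labeling(trained_model, X_unlabeled)
```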
Common types of data labeling
From what we have seen so far, data labeling is all about the task we want a machine learning algorithm to perform with our data.
For example—
If we want a machine learning algorithm for the task of defect inspection, we feed it data such as images of rust or cracks. The corresponding annotation would be polygons for localization of those cracks or corrosion, and tags for naming them.
Here are some common AI domains and their respective data annotation types.
Computer Vision
Computer vision (or the research to help computers “see” the world around them) requires annotated visual data in the form of images. Data annotations in computer vision can be of various types, depending on the visual task that we want the model to perform.
Common data annotation types based on the task are listed below.
Image Classification: Data annotation for image classification entails the addition of a tag to the image being worked on. The number of unique tags in the entire database is the number of classes that the model can classify.
Classification problems can be further divided into:
Binary class classification (which consists of only two tags)
Multiclass classification (which contains multiple tags)
Furthermore, there is multi-label classification, seen particularly in disease detection, where each image carries more than a single tag.
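To make the distinction concrete, here is one common way these three tagging schemes are encoded as model targets (the class names and values are purely illustrative):

```python
# How the three classification schemes are commonly encoded (illustrative).
import numpy as np

binary_label = 1             # binary: one of exactly two tags, e.g. cat vs. dog
multiclass_label = 2         # multiclass: a single class index out of many

# Multi-label: one image can carry several tags at once, as in disease
# detection, so the ground truth is a multi-hot vector over all classes.
classes = ["pneumonia", "edema", "fracture", "nodule"]
multilabel_target = np.array([1, 0, 0, 1])   # pneumonia and nodule both present
```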
Image Segmentation: In image segmentation, the task of the computer vision algorithm is to separate objects in the image from their background and from other objects in the same image. The ground truth is generally a pixel map of the same size as the image, containing 1 where the object is present and 0 elsewhere.
For multiple objects to be segmented in the same image, pixel maps for each object are concatenated channel-wise and used as ground truth for the model.
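A minimal sketch of such ground truth with NumPy, using a tiny 4x4 "image" and two hypothetical objects:

```python
# Segmentation ground truth as binary pixel maps, one per object, stacked
# channel-wise. Shapes and values are illustrative.
import numpy as np

height, width = 4, 4
mask_object_1 = np.zeros((height, width), dtype=np.uint8)
mask_object_1[1:3, 1:3] = 1     # 1 where the first object is present, 0 elsewhere

mask_object_2 = np.zeros((height, width), dtype=np.uint8)
mask_object_2[0, 3] = 1         # 1 where the second object is present

ground_truth = np.stack([mask_object_1, mask_object_2], axis=0)
print(ground_truth.shape)       # (2, 4, 4): one channel per object
```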
Object Detection: Object Detection refers to the detection of objects and their locations via computer vision.
The data annotation in object detection is vastly different from that in Image Classification, with each object annotated using bounding boxes. A bounding box is the smallest rectangular segment that contains the object in the image. Bounding box annotations are typically accompanied by tags where each bounding box is given a label in the image.
Generally, the coordinates of these bounding boxes and the corresponding tags for them are stored in a separate JSON file in a dictionary format with the image number/image ID being the key of the dictionary.
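A hypothetical example of such a file; the exact schema varies from tool to tool, so treat this only as an illustration of the idea:

```python
# A hypothetical bounding-box annotation file keyed by image ID.
import json

annotations = {
    "image_001": [
        {"label": "crack", "bbox": [34, 50, 120, 180]},  # [x_min, y_min, x_max, y_max]
        {"label": "rust", "bbox": [200, 40, 260, 90]},
    ],
    "image_002": [
        {"label": "crack", "bbox": [10, 10, 64, 128]},
    ],
}

with open("annotations.json", "w") as f:
    json.dump(annotations, f, indent=2)
```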
Pose estimation: Pose estimation refers to the use of computer vision tools to estimate the pose of a person in an image. Pose estimation works by detecting key points on the body and correlating these key points to obtain the pose. The corresponding ground truth for a pose estimation model, thus, would be the key points of an image: simple coordinate data labeled with the help of tags, where each coordinate gives the location in the image of the particular key point identified by the tag.
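For instance, a pose annotation for a single person might look like the following hypothetical record, with each tag identifying a key point and each value giving its (x, y) coordinate in the image:

```python
# A hypothetical pose-estimation ground truth record for one person.
keypoints = {
    "left_shoulder": (120, 85),
    "right_shoulder": (180, 86),
    "left_elbow": (100, 140),
    "right_elbow": (200, 142),
    "left_wrist": (95, 200),
    "right_wrist": (210, 198),
}
```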
Check out 15+ Top Computer Vision Project Ideas for Beginners to build your own computer vision model in less than an hour.
Natural Language Processing
Natural language processing (or NLP for short) refers to the analysis of human language as it is used in interactions both with other humans and with machines. Originally a part of computational linguistics, NLP has developed further with the help of artificial intelligence and deep learning.
Here are some of the data labeling approaches for labeling NLP data.
Entity annotation and linking: Entity annotation refers to the annotation of entities, or particular features, in the unlabeled data corpus.
The word ‘Entity’ can take different forms depending on the task at hand.
For the annotation of proper nouns, we have named entity annotation, which refers to the identification and tagging of names in the text. For the analysis of phrases, we use keyphrase tagging, where keywords or keyphrases from the text are annotated. For the analysis and annotation of functional elements of text, like verbs, nouns, and prepositions, we use part-of-speech tagging, abbreviated as POS tagging.
POS tagging is used in parsing, machine translation, and generation of linguistic data.
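As an illustration, here is how POS tags and named entities can be extracted with spaCy, assuming the en_core_web_sm model has been downloaded; in a labeling workflow, such output would typically be reviewed and corrected by human annotators rather than taken as ground truth.

```python
# POS tags and named entities with spaCy (assumes the en_core_web_sm model
# has been downloaded, e.g. via `python -m spacy download en_core_web_sm`).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Marie Curie won the Nobel Prize in 1903.")

print([(token.text, token.pos_) for token in doc])   # part-of-speech tags
print([(ent.text, ent.label_) for ent in doc.ents])  # named entities
```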
Entity annotation is followed by entity linking, where the annotated entities are linked to data repositories around them to assign a unique identity to each entity. This is particularly important when the text contains ambiguous data that needs to be disambiguated.
Entity linking is often used for semantic annotation, where the semantic information of entities is added as annotations.
Text classification: Similar to image classification where we assign a label to image data, in text classification, we assign one or multiple labels to blocks of text.
While entity annotation and linking separate out entities within each line of the text, text classification considers the text as a whole and assigns a set of tags to it. Types of text classification include classification on the basis of sentiment (for sentiment analysis) and classification on the basis of the topic the text conveys (for topic categorization).
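A minimal sentiment classification sketch with scikit-learn, trained on a tiny, purely illustrative labeled dataset; each block of text receives exactly one tag:

```python
# A minimal text-classification sketch: TF-IDF features and a linear model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Great product, works perfectly",
    "Terrible, broke after a day",
    "Absolutely love it",
    "Worst purchase I have made",
]
labels = ["positive", "negative", "positive", "negative"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)                 # each whole text carries one tag
print(classifier.predict(["I love this"]))    # ['positive']
```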
Phonetic annotation: Phonetic annotation refers to the labeling of commas, semicolons, and stops in the text and is particularly necessary for chatbots that generate textual information based on the input provided to them. Commas and stops at unintended places can change the structure of a sentence, which adds to the importance of this step.
Audio annotation
Audio annotation is necessary for the proper use of audio data in machine learning tasks like speaker identification and the extraction of linguistic tags from audio information. While speaker identification is the simple addition of a label or tag to an audio file, annotating linguistic data is a more complex procedure.
When annotating linguistic data, the speech regions are annotated first, since no audio recording is expected to contain 100 percent speech. Surrounding sounds are then tagged, and a transcript of the speech is created for further processing with the help of NLP algorithms.
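A hypothetical audio annotation record illustrating this structure; the field names and time stamps are invented for the example:

```python
# A hypothetical audio annotation record: speech regions are marked with
# start/end times, surrounding sounds are tagged, and speech is transcribed.
annotation = {
    "file": "clip_007.wav",
    "speaker": "speaker_A",
    "segments": [
        {"start": 0.0, "end": 1.2, "tag": "background_noise"},
        {
            "start": 1.2,
            "end": 4.8,
            "tag": "speech",
            "transcript": "Please schedule the inspection for Monday.",
        },
        {"start": 4.8, "end": 5.5, "tag": "silence"},
    ],
}
```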
How does data labeling work
Data labeling processes work in the following chronological order:
Data collection: Raw data that will be used to train the model is collected. This data is cleaned and processed to form a dataset that can be fed directly to the model.
Data tagging: Various data labeling approaches are used to tag the data and associate it with meaningful context that the machine can use as ground truth.
Quality assurance: The quality of data annotations is often determined by how precise the tags are for a particular data point and how accurate the coordinate points are for bounding box and keypoint annotations. QA methods like consensus algorithms and Cronbach's alpha test are very useful for determining the average accuracy of these annotations; a minimal consensus-style check is sketched below.
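As a minimal sketch of such a consensus-style check, the snippet below computes the intersection over union (IoU) between two annotators' bounding boxes and flags low-agreement samples for review; the boxes and the 0.8 threshold are illustrative.

```python
# A minimal consensus-style QA check: IoU between two annotators' boxes,
# with low-agreement samples flagged for review.
def iou(box_a, box_b):
    """Boxes are [x_min, y_min, x_max, y_max]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

agreement = iou([30, 40, 120, 180], [50, 60, 130, 185])
if agreement < 0.8:                  # threshold is illustrative
    print(f"Flag for review (IoU = {agreement:.2f})")
```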
Check out 20+ Open Source Computer Vision Datasets to find more quality data.
Labeling data with V7
V7 provides a vast array of tools essential for data annotation and tagging, allowing you to perform accurate annotations for segmentation, classification, object detection, or pose estimation at lightning-fast speeds.
V7 further allows you to train your models on the web itself, making the whole process of building an AI model fast and easy.
Upload your data: Add the data you want to annotate and the classes you want to tag, along with the type of annotation each class needs.
Forgot to add a class you need?
Don’t worry—you can always add them later!
Annotate: V7 offers a plethora of data labeling tools to help you annotate your machine learning data and complete your data labeling tasks.
Let us take a look at the bounding box tool and the auto-annotate tool on some data we uploaded.
Bounding box tool
The bounding box tool is used to help us fit bounding boxes onto objects and tag them correspondingly.
Here is an example of its use:
Read more: Annotating With Bounding Boxes: Quality Best Practices
Auto annotate tool
The auto-annotate tool is a specialized feature of V7 that sets it apart from other annotation tools. It can automatically capture fine-grained segmentation maps from images, making it one of the most useful tools for generating segmentation ground truth.
An example of the powerful auto-annotate tool can be seen here:
Train your model: Create your neural network and give it a suitable name. Train your model on the annotated data you generated.
Review and correct your annotations: Issues with your model performance or bad predictions? Review your annotations to make sure you didn’t miss out on anything in your training dataset! You can always come back to re-annotate and tag data samples correctly.
Re-train your model: Retrain your model on the newly annotated data.
Export your files: Export your data annotations easily with the help of the export button at the top:
Best practices for data labeling
With supervised learning being the most common form of machine learning today, data labeling finds itself in almost every workplace that talks about AI.
Here are some of the best practices for data labeling for AI to make sure your model isn’t crumbling due to poor data:
Proper dataset collection and cleaning: In any ML project, one of the first things to take care of is the data. The data should be diverse yet highly specific to the problem statement. Diverse data lets ML models handle multiple real-world scenarios, while specificity reduces the chances of errors. Similarly, appropriate bias checks prevent the model from overfitting to a particular scenario.
Proper annotation approach: The next most important decision in data labeling is how to assign the labeling task: in-house labeling, outsourcing, or crowdsourcing. The right choice of labeling approach helps keep the budget in check without cutting down annotation accuracy.
QA checks: Quality Assurance checks are absolutely mandatory for data that has been labeled via crowdsourcing or outsourcing means. QA checks prevent false labels and improperly labeled data from being fed to ML algorithms. Improper and imprecise annotation can easily act as noise and completely ruin an otherwise dependable ML model.
Data labeling: TL;DR
We talked about the forms of data annotation, common data annotation approaches, and some best practices for annotation.
Here's a short summary of key points we've covered.
Almost all AI algorithms work on the assumption that the ground truth data they are provided with is completely accurate. Inaccuracies in human data annotation often result in these models being unable to perform at their best, bringing down the overall prediction accuracy.
Data labeling and annotation thus form one of the biggest challenges faced by AI today, hindering large-scale AI integration across industries. Accurate and careful annotation of data that can bring out the best in any ML model is always in high demand and is a fundamental part of any successful ML project.
Read more:
Optical Character Recognition: What is It and How Does it Work [Guide]
The Complete Guide to CVAT—Pros & Cons
YOLO: Real-Time Object Detection Explained
The Ultimate Guide to Semi-Supervised Learning
The Beginner’s Guide to Contrastive Learning
9 Reinforcement Learning Real-Life Applications
Mean Average Precision (mAP) Explained: Everything You Need to Know
Hmrishav Bandyopadhyay studies Electronics and Telecommunication Engineering at Jadavpur University. He previously worked as a researcher at the University of California, Irvine, and Carnegie Mellon University. His deep learning research revolves around unsupervised image de-warping and segmentation.