Hugging Face Integration: Easier Way to Use External Models

Learn how to incorporate Hugging Face models into your V7 data annotation projects. Discover a wide array of models that will help you save costs and build better AI.
September 20, 2023
5 mins read

The registration of external models has always been a powerful feature of V7, and with our latest update it has been supercharged: you can now integrate external models available on Hugging Face into your V7 workflows in less than a minute. Just copy and paste the inference endpoint of a model of your choice to use it in V7 Darwin. This is done through the Models tab, and the new functionality automatically maps Hugging Face payloads into Darwin-supported formats, so no extra effort is required on your side.

More about Hugging Face

Hugging Face is a platform that hosts a diverse array of machine learning models. This includes a variety of computer vision models, language models, and multimodal solutions. Users can easily navigate through specialized categories such as Object Detection, Image Segmentation, Image-to-Image Transformation, Video Classification, and Zero-Shot Image Classification. These models are designed to cater to a broad spectrum of AI requirements, and accessing them is as straightforward as exploring the Hugging Face repository.

hugging face models

Key benefits of using Hugging Face models

  • Diverse Model Selection. Hugging Face boasts an extensive collection of models. Regardless of the AI task at hand, it’s very likely you’ll find a model that fits your needs.
  • Cost-Efficient and Scalable. The platform provides cost-effective AI solutions, eliminating the need to develop models from scratch. As your enterprise evolves, transitioning to more sophisticated, premium options is seamless, ensuring uninterrupted operations.
  • Community-Powered Enhancements. A standout feature of Hugging Face is its vibrant community. This ensures that models are continuously refined and updated, guaranteeing users always have access to the latest advancements in AI.

Below, we’ve included some of the model inference endpoints you can test:

DETR (End-to-End Object Detection) model with ResNet-50

An object detection model using the Transformer architecture with a ResNet-50 backbone. Returns bounding boxes and class labels for detected objects.

Inference endpoint: https://api-inference.huggingface.co/models/facebook/detr-resnet-50
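
If you want to preview the raw payload V7 receives from this endpoint, a minimal Python sketch along these lines (our own example, not part of V7; the access token placeholder and image path are assumptions) queries the hosted model directly:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/detr-resnet-50"
HEADERS = {"Authorization": "Bearer hf_xxx"}  # replace with your own Hugging Face token

def detect_objects(image_path: str):
    """Send raw image bytes to the hosted model and return its JSON response."""
    with open(image_path, "rb") as f:
        response = requests.post(API_URL, headers=HEADERS, data=f.read())
    response.raise_for_status()
    # Typical response shape (one entry per detection):
    # [{"score": 0.99, "label": "cat",
    #   "box": {"xmin": 10, "ymin": 20, "xmax": 300, "ymax": 250}}, ...]
    return response.json()

if __name__ == "__main__":
    for det in detect_objects("street_scene.jpg"):
        print(det["label"], round(det["score"], 2), det["box"])
```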

Segformer B2 fine-tuned for clothes segmentation

A segmentation model tailored for identifying clothes in images. Provides polygon segmentation masks indicating different clothing items and body parts.

Inference endpoint: https://api-inference.huggingface.co/models/mattmdjaga/segformer_b2_clothes
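
For segmentation tasks, the masks typically come back as base64-encoded PNGs inside the JSON payload. A short sketch like the one below (again ours, with an assumed image file and token placeholder) decodes them for inspection:

```python
import base64
import io
import requests
from PIL import Image  # pip install pillow

API_URL = "https://api-inference.huggingface.co/models/mattmdjaga/segformer_b2_clothes"
HEADERS = {"Authorization": "Bearer hf_xxx"}  # replace with your own Hugging Face token

with open("outfit.jpg", "rb") as f:
    results = requests.post(API_URL, headers=HEADERS, data=f.read()).json()

# Each entry usually looks like {"label": "...", "score": ..., "mask": "<base64 PNG>"}.
for item in results:
    mask = Image.open(io.BytesIO(base64.b64decode(item["mask"])))
    print(item["label"], mask.size)
```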

ViT for age classification

Applies the Transformer architecture to image patches for age prediction. Outputs tags with predicted age brackets along with confidence scores.

Inference endpoint: https://api-inference.huggingface.co/models/nateraw/vit-age-classifier
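
Classification endpoints return a flat list of labels and scores, which is what V7 maps to tags. A quick sketch (hypothetical file name and token placeholder) shows the shape:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/nateraw/vit-age-classifier"
HEADERS = {"Authorization": "Bearer hf_xxx"}  # replace with your own Hugging Face token

with open("portrait.jpg", "rb") as f:
    predictions = requests.post(API_URL, headers=HEADERS, data=f.read()).json()

# Typical shape: a list of {"label": "<age bracket>", "score": <confidence>},
# sorted by confidence. The exact bracket labels come from the model card.
print(predictions[:3])
```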

object detection model

How to use Hugging Face models in V7

Recently, we introduced a new, simpler way to register external models via the Bring Your Own Model interface. While custom models may require some additional configuration, the process is even simpler for models hosted on Hugging Face. With the updated V7 Models panel, all it takes is a single click on the Register External Model button and a quick copy and paste of the inference endpoint from the model's Hugging Face page.

registering hugging face model in v7

Step 1. Set up the inference URL

You can find endpoints for specific models using the Deploy button on Hugging Face. For example, for Facebook's DETR + ResNet-50 model, it is:

https://api-inference.huggingface.co/models/facebook/detr-resnet-50

setting up inference url for hugging face model in v7

Step 2. Upload a test image

Feel free to upload an image to test the inference endpoint. If the system detects any labels, they will be conveniently displayed next to the image, along with the model's output in JSON format.

uploading a test image in v7

Step 3. Register classes

You have the option to automatically add the classes that were identified by the model. If some classes were not detected in the test image, you can also add them manually. For example, in our image above, the motorcycle class is missing because there are no motorcycles in the image, so we need to register this class ourselves.

registering classes in v7

Step 4. Connect the model to your V7 workflow

Once the external Hugging Face model has been registered, you will be able to utilize it as a workflow stage or easily access it in the annotation panel using the Auto-Annotate tool.

connecting a model to external workflow

The key distinction between using a model stage and employing Hugging Face models within the annotation panel centers on the scope of analysis. In a model stage, the entire image is processed and analyzed by the model. On the other hand, the Auto-Annotate feature in the annotation panel gives you the flexibility to focus on particular sections of the image, allowing you to choose which areas are analyzed.

Common use cases and additional considerations

Many Hugging Face models cater to traditional computer vision tasks such as classification, object detection, and instance segmentation. These tasks map to specific V7 annotation classes: tags, bounding boxes, and polygons, respectively. In most scenarios, the payloads are automatically and accurately interpreted, with results returned alongside their confidence scores.
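
V7 handles this mapping for you. Purely to illustrate the idea, here is a hypothetical conversion (not V7's actual implementation, and not the Darwin export format) from a Hugging Face object-detection payload to generic bounding-box annotations:

```python
def hf_detection_to_boxes(payload, min_score=0.5):
    """Illustrative only: turn Hugging Face detection results into simple box dicts."""
    annotations = []
    for det in payload:
        if det["score"] < min_score:
            continue  # drop low-confidence detections
        box = det["box"]
        annotations.append({
            "class_name": det["label"],
            "confidence": det["score"],
            "bounding_box": {
                "x": box["xmin"],
                "y": box["ymin"],
                "w": box["xmax"] - box["xmin"],
                "h": box["ymax"] - box["ymin"],
            },
        })
    return annotations
```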

Common use cases include:

  • Content Moderation. With models specialized in image recognition, businesses can automate the process of moderating user-generated content on their platforms, ensuring that inappropriate or harmful content is swiftly detected and removed.
  • Healthcare Imaging. Medical institutions can use image segmentation models to analyze medical images and digital pathology tissue samples, aiding in the early detection of diseases or abnormalities.
  • Natural Language Processing. For businesses that rely heavily on text, language models from Hugging Face can be used for tasks like sentiment analysis, chatbot interactions, and content summarization.
  • Smart Surveillance. Object detection models can be integrated into surveillance systems to detect unusual activities, unauthorized entries, or even to manage crowd control at public events.
  • Agriculture. Advanced models can analyze satellite images to monitor crop health, predict yields, and even detect early signs of pest infestations.

When working with certain models, you might encounter a 503 status response. This means the service is temporarily unable to process your request, which often happens while a model hosted on shared infrastructure is still loading or when the service is overloaded. Sending an excessive number of requests in a brief span can also push you past the API's rate limits. It's worth reviewing the documentation for individual models so you know which limits apply and can avoid breaching them. For uninterrupted service, consider production-ready inference endpoints that use Hugging Face credits through your account.
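
If you just want your own endpoint tests to tolerate these temporary 503 responses, a simple retry loop along the lines below (a sketch with assumed parameters, not a V7 feature) is usually enough:

```python
import time
import requests

def query_with_retry(api_url, headers, image_bytes, retries=5, wait_seconds=10.0):
    """POST image bytes, retrying while the endpoint answers with 503."""
    for attempt in range(retries):
        response = requests.post(api_url, headers=headers, data=image_bytes)
        if response.status_code != 503:
            response.raise_for_status()
            return response.json()
        # Back off before the next attempt so a loading model has time to warm up.
        time.sleep(wait_seconds * (attempt + 1))
    raise RuntimeError(f"Endpoint still unavailable after {retries} attempts")
```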

If you're an existing user, dive into your V7 dashboard and start experimenting with different inference endpoints. You'll likely discover numerous Hugging Face models that can elevate your projects or simply streamline your work. If you don't have a V7 account yet, sign up and set up your first project today.
