Registering external models has always been a powerful feature of V7, and our latest update makes it more powerful still: you can now integrate models hosted on Hugging Face into your V7 workflows in under a minute. Just copy and paste the inference endpoint of a model of your choice to use it in V7 Darwin. The integration lives in the Models tab, and the new functionality automatically maps Hugging Face payloads to Darwin-supported formats, so no extra conversion work is required.
Hugging Face is a platform that hosts a diverse array of machine learning models. This includes a variety of computer vision models, language models, and multimodal solutions. Users can easily navigate through specialized categories such as Object Detection, Image Segmentation, Image-to-Image Transformation, Video Classification, and Zero-Shot Image Classification. These models are designed to cater to a broad spectrum of AI requirements, and accessing them is as straightforward as exploring the Hugging Face repository.
Below, we’ve included some of the model inference endpoints you can test:
An object detection model using the Transformer architecture with a ResNet-50 backbone. Returns bounding boxes and class labels for detected objects.
Inference endpoint: https://api-inference.huggingface.co/models/facebook/detr-resnet-50
A segmentation model tailored for identifying clothes in images. Provides polygon segmentation masks indicating different clothing items and body parts.
Inference endpoint: https://api-inference.huggingface.co/models/mattmdjaga/segformer_b2_clothes
Applies the Transformer architecture to image patches for age prediction. Outputs tags with predicted age brackets along with confidence scores.
Inference endpoint: https://api-inference.huggingface.co/models/nateraw/vit-age-classifier
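If you want to try one of these endpoints outside of V7 first, a minimal sketch looks like the following. It assumes you have a Hugging Face access token and a local image file; the token value and image path shown are placeholders.

```python
import json
import urllib.request

# One of the endpoints listed above (DETR object detection).
API_URL = "https://api-inference.huggingface.co/models/facebook/detr-resnet-50"


def auth_headers(token: str) -> dict:
    # The serverless Inference API authenticates with a Bearer token.
    return {"Authorization": f"Bearer {token}"}


def query(image_path: str, token: str):
    # POST the raw image bytes; the API responds with JSON predictions.
    with open(image_path, "rb") as f:
        data = f.read()
    req = urllib.request.Request(API_URL, data=data, headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Replace with a real image path and your own Hugging Face token.
    print(query("street.jpg", "hf_xxx"))
```

Within V7 none of this plumbing is needed; pasting the endpoint URL is enough.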
Recently, we introduced a new, simpler way to register external models via the Bring Your Own Model interface. While custom models may require some additional configuration, the process is simpler still for models hosted on Hugging Face. With the updated V7 Models panel, all it takes is a single click on the Register External Model button and a quick copy and paste of the inference endpoint from the model's Hugging Face page.
You can find the endpoint for a specific model under the Deploy button on its Hugging Face page. For example, for the DETR + ResNet-50 model by Facebook it is:
https://api-inference.huggingface.co/models/facebook/detr-resnet-50
Feel free to upload an image to test the inference endpoint. If the system detects any labels, they will be conveniently displayed next to the image, along with the model's output in JSON format.
You have the option to automatically include the classes identified by the model. Classes that were not detected in the test image can also be added manually. For example, in our image above the motorcycle class is missing because there happen to be no motorcycles in the image, so we need to register this class ourselves.
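To make the auto-include step concrete, here is a small sketch of how the detected classes can be pulled out of a detection response. The response shape mirrors what DETR-style endpoints return (a list of score/label/box records); the sample values are invented for illustration.

```python
def detected_classes(predictions, threshold=0.5):
    # Collect the unique labels above a confidence threshold, sorted for stability.
    return sorted({p["label"] for p in predictions if p["score"] >= threshold})


# A trimmed response in the shape DETR-style detection endpoints return.
sample = [
    {"score": 0.99, "label": "person", "box": {"xmin": 12, "ymin": 30, "xmax": 110, "ymax": 290}},
    {"score": 0.97, "label": "bicycle", "box": {"xmin": 90, "ymin": 150, "xmax": 260, "ymax": 310}},
    {"score": 0.31, "label": "motorcycle", "box": {"xmin": 0, "ymin": 0, "xmax": 5, "ymax": 5}},
]

print(detected_classes(sample))  # ['bicycle', 'person']
```

Here the low-confidence motorcycle detection is filtered out, which is exactly the situation where you would register that class manually.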
Once the external Hugging Face model has been registered, you will be able to utilize it as a workflow stage or easily access it in the annotation panel using the Auto-Annotate tool.
The key distinction between using a model stage and employing Hugging Face models within the annotation panel centers on the scope of analysis. In a model stage, the entire image is processed and analyzed by the model. On the other hand, the Auto-Annotate feature in the annotation panel gives you the flexibility to focus on particular sections of the image, allowing you to choose which areas are analyzed.
Many Hugging Face models cater to traditional computer vision tasks such as classification, object detection, and instance segmentation. These tasks map to specific V7 annotation classes: tags, bounding boxes, and polygons, respectively. In most scenarios, the payloads are automatically and accurately interpreted, with results returned alongside their confidence scores.
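The payload mapping can be pictured roughly as follows. This is an illustrative sketch only: the output records use a simplified shape, not the exact Darwin annotation schema, and the input mirrors the score/label/box format of Hugging Face detection endpoints.

```python
def to_bounding_box_annotations(predictions, threshold=0.5):
    # Translate Hugging Face detection output into simplified annotation
    # records (illustrative only -- not the exact Darwin JSON schema).
    annotations = []
    for p in predictions:
        if p["score"] < threshold:
            continue
        box = p["box"]
        annotations.append({
            "name": p["label"],
            "confidence": p["score"],
            "bounding_box": {
                "x": box["xmin"],
                "y": box["ymin"],
                "w": box["xmax"] - box["xmin"],
                "h": box["ymax"] - box["ymin"],
            },
        })
    return annotations


sample = [{"score": 0.98, "label": "cat", "box": {"xmin": 10, "ymin": 20, "xmax": 110, "ymax": 170}}]
print(to_bounding_box_annotations(sample))
```

V7 performs this kind of translation for you automatically when a Hugging Face model is registered.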
Common use cases include:
When working with certain models, you might encounter a 503 status code. This means the service is temporarily unable to process your request, commonly because the model is still loading on Hugging Face's shared infrastructure, but it can also happen when an excessive number of requests are made in a brief span and the API's limits are exceeded. It's important to review the documentation for individual models to be aware of any rate limits and ensure they aren't breached. For uninterrupted service, consider leveraging production-ready inference endpoints that utilize Hugging Face credits through your account.
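Because a 503 is usually transient, a common pattern is to retry with exponential backoff rather than failing outright. A minimal sketch, with the request abstracted as a callable so the retry logic stays independent of any HTTP library:

```python
import time


def call_with_retries(request_fn, max_retries=5, base_delay=1.0):
    # Retry a request that may return a transient 503 (e.g. while the
    # model is loading), backing off exponentially between attempts.
    for attempt in range(max_retries):
        status, body = request_fn()
        if status != 503:
            return status, body
        time.sleep(base_delay * (2 ** attempt))
    return status, body


# Simulated endpoint: unavailable twice, then succeeds.
responses = iter([(503, "loading"), (503, "loading"), (200, "[]")])
status, body = call_with_retries(lambda: next(responses), base_delay=0.01)
print(status)  # 200
```

In a real client, `request_fn` would wrap the actual POST to the inference endpoint.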
If you're an existing user, dive into your V7 dashboard and start experimenting with different inference endpoints. You'll likely discover numerous Hugging Face models that can elevate your projects or simply streamline your work. If you don't have a V7 account yet, sign up and set up your first project today.