HACS

Human Action Clips and Segments Dataset for Recognition and Temporal Localization

This project introduces HACS (Human Action Clips and Segments), a new large-scale video dataset with two kinds of manual annotations. HACS Clips contains 1.55M two-second clip annotations; HACS Segments provides complete action segments (from action start to end) on 50K videos. The dataset is effective for pretraining action recognition and localization models, and also serves as a new benchmark for temporal action localization. (The SLAC dataset is now part of HACS.)
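To illustrate how segment-style annotations like these are typically consumed, here is a minimal sketch that walks an annotation file and summarizes per-class segment lengths. The file name and the field names (database, annotations, segment, label) are assumptions based on common ActivityNet-style temporal-localization formats, not a confirmed HACS schema.

import json
from collections import defaultdict

# Hypothetical annotation file; the field names used below are
# assumptions modeled on ActivityNet-style JSON, not a confirmed
# HACS schema.
ANNOTATION_FILE = "hacs_segments.json"

def load_segment_durations(path):
    """Collect segment durations (seconds) per action class."""
    with open(path) as f:
        data = json.load(f)

    durations = defaultdict(list)
    for video in data["database"].values():
        for ann in video.get("annotations", []):
            start, end = ann["segment"]  # seconds from video start
            durations[ann["label"]].append(end - start)
    return durations

if __name__ == "__main__":
    durations = load_segment_durations(ANNOTATION_FILE)
    for label, segs in sorted(durations.items()):
        mean = sum(segs) / len(segs)
        print(f"{label}: {len(segs)} segments, mean length {mean:.1f}s")

A pass like this is a common first step before training a temporal localizer, since segment-length statistics inform anchor or window-size choices.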

Source: MIT (https://www.mit.edu)
Items: 30,000
Classes: 200
Labels: 50,000
Last updated: January 20, 2022
License: Research Only