
HACS

Human Action Clips and Segments Dataset for Recognition and Temporal Localization


This project introduces a novel video dataset named HACS (Human Action Clips and Segments). It consists of two kinds of manual annotations: HACS Clips contains 1.55M 2-second clip annotations, and HACS Segments provides complete action segments (from action start to end) on 50K videos. The large-scale dataset is effective for pretraining action recognition and localization models, and also serves as a new benchmark for temporal action localization. (The SLAC dataset is now part of the HACS dataset.)
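As an illustration of how such annotations are typically consumed, the minimal Python sketch below loads a segments file and collects the temporal segments per action class. It assumes an ActivityNet-style JSON layout (a top-level "database" dict keyed by video ID, each entry carrying a "subset" and a list of "annotations" with "label" and "segment" fields); the file name and field names here are assumptions, not specified on this page.

```python
import json
from collections import Counter

def load_segments(path="HACS_segments.json", subset="training"):
    """Load temporal action segments from a HACS-style annotation file.

    Assumed layout (ActivityNet-style, not confirmed by this page):
      {"database": {"<video_id>": {"subset": "training",
                                   "annotations": [{"label": "...",
                                                    "segment": [start_s, end_s]}]}}}
    """
    with open(path) as f:
        database = json.load(f)["database"]

    segments = []
    for video_id, info in database.items():
        if info.get("subset") != subset:
            continue
        for ann in info.get("annotations", []):
            start, end = ann["segment"]
            segments.append((video_id, ann["label"], start, end))
    return segments

if __name__ == "__main__":
    segs = load_segments()
    print(f"{len(segs)} action segments in the training subset")
    # Count segments per action class (the dataset covers 200 classes).
    per_class = Counter(label for _, label, _, _ in segs)
    print(per_class.most_common(5))
```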

Author: MIT
Task: Event Detection
Annotation Types: Bounding Boxes
Items: 30,000
Classes: 200
Labels: 50,000
Last updated on October 31, 2023
Licensed under Research Only