UTD-MHAD

Action recognition dataset

The UTD-MHAD dataset consists of 27 different actions performed by 8 subjects. Each subject repeated each action 4 times; after removing three corrupted sequences, the dataset contains 861 action sequences in total. RGB video, depth, skeleton joint positions, and wearable inertial sensor signals were recorded for each sequence.
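
Because each sequence is indexed by action, subject, and trial, it is straightforward to iterate over one modality at a time. Below is a minimal Python sketch, assuming the aA_sS_tT_&lt;modality&gt;.mat file naming and the 'd_iner' / 'd_skel' / 'd_depth' variable names commonly seen in the public release; both the naming pattern and the directory path are assumptions and may need adjusting to the files you actually download.

```python
import re
from pathlib import Path

from scipy.io import loadmat

# Assumed filename pattern: a<action>_s<subject>_t<trial>_<modality>.mat,
# e.g. a1_s1_t1_inertial.mat (adjust if your copy differs).
FILENAME_RE = re.compile(r"a(\d+)_s(\d+)_t(\d+)_(\w+)\.mat")

# Assumed .mat variable names per modality.
VAR_NAMES = {"inertial": "d_iner", "skeleton": "d_skel", "depth": "d_depth"}


def load_sequences(root, modality="inertial"):
    """Yield (action, subject, trial, array) tuples for one modality."""
    var_name = VAR_NAMES[modality]
    for path in sorted(Path(root).glob(f"*_{modality}.mat")):
        match = FILENAME_RE.match(path.name)
        if match is None:
            continue  # skip files that do not follow the naming convention
        action, subject, trial = (int(g) for g in match.groups()[:3])
        data = loadmat(str(path))[var_name]
        yield action, subject, trial, data


if __name__ == "__main__":
    # Example: count sequences per subject for the inertial modality.
    # "UTD-MHAD/Inertial" is a hypothetical local directory.
    from collections import Counter

    counts = Counter(s for _, s, _, _ in load_sequences("UTD-MHAD/Inertial"))
    print(counts)
```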

Authors
Signal and Image Processing (SIP) Lab & Embedded Systems and Signal Processing (ESSP) Lab
Task
Action Recognition
Annotation Types
Bounding Boxes
Last updated on October 31, 2023
Licensed under: Unknown