
MMAct

A Large-Scale Dataset for Cross Modal Learning on Human Action Understanding


MMAct is a new large-scale multimodal dataset for human action understanding, covering seven sensing modalities.

- 7 MODALITIES: RGB, keypoints, acceleration, gyroscope, orientation, Wi-Fi, pressure
- 1900+ VIDEOS: untrimmed videos at 1920x1080, 30 FPS
- 36K INSTANCES: trimmed action instances, typically 3-8 seconds long
- 37 CLASSES: daily, abnormal, and desk-work actions
- 4 SCENES: free space, occlusion, station entrance, desk work
- 4 + 1 VIEWS: 4 surveillance views + 1 egocentric view
- 20 SUBJECTS: 10 female, 10 male
- RANDOMNESS: collected under a semi-naturalistic collection protocol
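For orientation, the sketch below shows how a single trimmed MMAct instance with all seven modalities might be represented in code. The class name, field names, and array shapes are assumptions for illustration only; they are not part of an official MMAct toolkit.

```python
# Hypothetical record for one trimmed MMAct action instance.
# Shapes and field names are assumptions, not an official API.
from dataclasses import dataclass
import numpy as np

@dataclass
class MMActSample:
    """One action instance with its seven modality streams and metadata."""
    rgb: np.ndarray           # (T, 1080, 1920, 3) frames at 30 FPS
    keypoints: np.ndarray     # (T, J, 2) 2D body keypoints per frame
    acceleration: np.ndarray  # (Ta, 3) accelerometer readings
    gyroscope: np.ndarray     # (Tg, 3) gyroscope readings
    orientation: np.ndarray   # (To, 3) orientation readings
    wifi: np.ndarray          # (Tw, C) Wi-Fi signal features
    pressure: np.ndarray      # (Tp,) pressure readings
    label: int                # index into the 37 action classes
    view: str                 # one of the 4 surveillance views or "ego"
    scene: str                # free space, occlusion, station entrance, desk work
    subject: int              # subject ID, 1-20
```

In a cross-modal learning setup, the video-side streams (RGB, keypoints) and the wearable-sensor streams (acceleration, gyroscope, orientation, Wi-Fi, pressure) from the same instance would typically be paired for training.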

Task: Event Detection
Annotation Types: Bounding Boxes
Items: 1900
Classes: 37
Labels: 1900
Last updated: October 31, 2023
License: Research Only