AVAMVG

The AVA Multi-View Gait Dataset.

Most current multi-view datasets were recorded under controlled conditions and, in some cases, made use of a treadmill. As a consequence, most of them are not representative of human gait in the real world. In addition, few multi-view datasets are specifically designed for gait recognition: some target action recognition in general and therefore do not contain walking sequences long enough to span several gait cycles.

For these reasons, we created a new multi-view database containing gait sequences of 20 actors, each depicting ten different trajectories. The database was specifically designed for testing gait recognition algorithms based on 3D data: the cameras are calibrated, so methods based on 3D reconstruction can be evaluated on it. Binary silhouettes for each video sequence are also provided.

Using the camera setup described above, twenty subjects (4 females and 16 males) participated in ten recording sessions each. Consequently, the database contains 200 multi-view videos, or 1200 (6 x 200) single-view videos. Below we briefly describe the walking activity carried out by each actor.

Ten gait trajectories were designed before the recording sessions. Each actor performs three straight walking sequences (t1,...,t3) and six curved gait sequences (t4,...,t9), as if rounding a corner. Each curved path consists of an initial straight section, a slight turn, and a final straight segment. In the last sequence, actors describe a figure-eight path (t10).
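The layout above can be sketched as a simple enumeration. The actor, trajectory, and camera identifiers below are hypothetical (the actual file naming in the dataset may differ); the counts follow directly from the description: 20 actors x 10 trajectories = 200 multi-view sequences, each seen by 6 cameras = 1200 single-view videos.

```python
from itertools import product

# Hypothetical identifiers; actual AVAMVG file names may differ.
ACTORS = [f"actor{a:02d}" for a in range(1, 21)]   # 20 subjects (4 female, 16 male)
TRAJECTORIES = [f"t{t}" for t in range(1, 11)]     # t1-t3 straight, t4-t9 curved, t10 figure-eight
CAMERAS = [f"cam{c}" for c in range(1, 7)]         # 6 calibrated cameras

# One multi-view recording per (actor, trajectory) pair.
multi_view = list(product(ACTORS, TRAJECTORIES))
# One single-view video per (actor, trajectory, camera) triple.
single_view = list(product(ACTORS, TRAJECTORIES, CAMERAS))

print(len(multi_view))   # 200 multi-view videos
print(len(single_view))  # 1200 (6 x 200) single-view videos
```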

Task
Human Pose Estimation
Annotation Types
Semantic Segmentation
Last updated on 
January 31, 2022
Licensed under 
Research Only