The large-scale MEVA dataset is designed for activity detection in multi-camera environments. It was created under the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program to support DIVA performers and the broader research community.
NEWS:
15 November 2021: On-line interactive visualizations of MEVA ground truth are available; see here for more details.
17 September 2021: Ground camera video to support the HADCV22 Self-Reported Leaderboard Challenge is available. Please see the README for updated access details, in particular the note regarding use of this data with the ActEV Leaderboard.
20 June 2021: The transcoded data is available. Please see the README for updated access details, and the transcoding FAQ for transcoding details.
MEVA aims to build a corpus of activity video collected from multiple viewpoints in realistic settings.
A MEVA data users Google group facilitates communication and collaboration among those working with the data; join the conversation through the meva-data-users group.
The dataset is described in our WACV 2021 paper. The bibtex citation is:
    @InProceedings{Corona_2021_WACV,
      author    = {Corona, Kellie and Osterdahl, Katie and Collins, Roderic and Hoogs, Anthony},
      title     = {MEVA: A Large-Scale Multiview, Multimodal Video Dataset for Activity Detection},
      booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
      month     = {January},
      year      = {2021},
      pages     = {1060-1068}
    }
The KF1 data was collected over three weeks at the Muscatatuck Urban Training Center (MUTC) with a team of over 100 actors performing in various scenarios. The camera fields of view, both overlapping and non-overlapping, capture person and vehicle activities in indoor and outdoor environments. The scenarios were realistic and included a variety of scripted and unscripted activities.
The camera infrastructure included commercial off-the-shelf EO cameras, thermal infrared cameras deployed as several IR-EO pairs, and two DJI Inspire 1 v2 drones; handheld cameras captured a range of still images.
The actors also carried GPS loggers; see here for more details.
Visualizations of MEVA ground truth are available via our DIVE analytics toolchain. Create an account and click here to view MEVA video and its associated ground truth.
[Figure: Visualization of the fine-grained MUTC 3D model.]
[Figure: Annotation sample (accelerated for display).]
All MEVA data is available for use under a CC BY-4.0 license; the general MEVA data license is available here.
MEVA video data is hosted in an Amazon Web Services (AWS) S3 bucket; downloads are provided at no cost, sponsored through Amazon's AWS Public Dataset Program.
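Because the bucket is public, clips can be fetched anonymously. Below is a minimal sketch in Python using boto3 with unsigned requests; the bucket name, prefix, and object key are illustrative placeholders, not the actual layout, so consult the MEVA data git repository for the real paths.

    # Minimal sketch: anonymous download from the public MEVA S3 bucket.
    # NOTE: BUCKET, PREFIX, and the object key below are illustrative
    # placeholders; consult the MEVA data git repository for the actual
    # bucket name and object layout.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # The dataset is public, so requests may be unsigned (no AWS credentials).
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    BUCKET = "meva-example-bucket"  # placeholder
    PREFIX = "example-drop/"        # placeholder

    # List a few objects under the prefix to see what is available.
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=10)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])

    # Download a single clip to the local working directory.
    key = PREFIX + "2018-03-05.09-10-00.09-15-00.school.G300.avi"  # placeholder
    s3.download_file(BUCKET, key, "clip.avi")

The same unsigned-request approach works with the AWS CLI or any S3-compatible tooling.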
The MEVA data git repository is an evolving collection of metadata and annotations released for the MEVA KF1 data; highlights include several annotation efforts.
The MEVA data can be annotated using your preferred annotation toolchain. For annotating and using the MEVA data, the following steps are recommended:
1. Download and review short clips of visualized annotations for each activity type (see the "Download exemplars" link).
2. Download the current activity definitions (see the "Download guidelines" link); these should guide which activities and objects are annotated.
3. Generate schema like these based on the format described here as part of our annotation git repository, as sketched below.
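The schema format referenced above is, as far as we understand, the YAML-based KWIVER Packet Format (KPF). As a rough illustration of what emitting a schema-conformant activity record might look like, here is a Python sketch using PyYAML; the field names (act2, id2, src, tsr0, id1, and so on) reflect one reading of the KPF conventions and should be verified against the annotation repository's format documentation before use.

    # Rough sketch: emit one activity record in a KPF-style YAML packet.
    # ASSUMPTION: field names (act2, id2, src, timespan, tsr0, actors, id1)
    # are illustrative; verify them against the annotation repository's docs.
    import yaml

    activity = {
        "act": {
            "act2": {"person_opens_facility_door": 1.0},  # label and confidence
            "id2": 1,                                     # activity identifier
            "src": "truth",                               # annotation source
            "timespan": [{"tsr0": [4213, 4337]}],         # frame range
            "actors": [
                # track id1 participates over the same frame range
                {"id1": 9, "timespan": [{"tsr0": [4213, 4337]}]},
            ],
        }
    }

    # KPF records are commonly written in YAML flow style, one per line.
    with open("example.activities.yml", "w") as f:
        yaml.safe_dump([activity], f, default_flow_style=True)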
Contributing your annotations will increase the utility of the MEVA KF1 dataset for everyone. Please clone our annotation git repository and file a merge request to have your annotations pooled back into the master branch.