The Multiview Extended Video with Activities (MEVA) dataset

The large-scale MEVA dataset is designed for activity detection in multi-camera environments. It was created under the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program to support DIVA performers and the broader research community.

29 April 2020: We've updated the video data to remove a block of 130 corrupted videos for the G328 camera, and to rotate all 83 instances of the G639 camera. See Section 3.1 of the MEVADATA readme for more details.
19 March 2020: We've released GPS data associated with the actors in the released video. See here for details.
13 December 2019: We're pleased to announce annotations for an additional 6 hours of MEVA data, resulting in 22 hours of annotated data. Annotations are available via the git repository.

About MEVA data

MEVA aims to build a corpus of activity video collected from multiple viewpoints in realistic settings.

There is a MEVA data users Google group to facilitate communication and collaboration for those interested in working with the data. Join the conversation through the meva-data-users group.


Known Facility Release #1 ("KF1"):

The KF1 data was collected over a total of three weeks at the Muscatatuck Urban Training Center (MUTC) with a team of over 100 actors performing in various scenarios. The fields of view, both overlapping and non-overlapping, capture person and vehicle activities in indoor and outdoor environments. There were multiple realistic scenarios with a variety of scripted and non-scripted activities.

The camera infrastructure included commercial off-the-shelf EO cameras, thermal infrared cameras deployed as part of several IR-EO pairs, and two DJI Inspire 1 v2 drones; a range of still images from handheld cameras was also collected.

The actors were also carrying GPS loggers; see here for more details.

Montage of randomly selected KF1 ground camera clips (re-encoded for accelerated playback).
Montage of randomly selected KF1 UAV video (re-encoded for accelerated playback).

Visualization of the fine-grained MUTC 3D model.

Annotation sample (accelerated for display).



All MEVA data is available for use under a CC BY-4.0 license; the general MEVA data license is available here.

Video data on AWS

MEVA video data is hosted on an Amazon Web Services (AWS) S3 bucket; download is provided at no cost via sponsorship through Amazon's AWS Public Dataset Program.

As of 12 Dec 2019, the video corpus includes:
  • 328 hours (516GB) of ground camera data
  • 4.6 hours (26GB) of UAV data

Click here for download instructions.
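Since the corpus is hosted in a public S3 bucket, the download amounts to anonymous access with the AWS CLI. A minimal sketch follows; the bucket name and the clip-naming pattern are assumptions here, so confirm both against the linked download instructions before running.

```shell
# Hypothetical bucket name; verify against the official download instructions.
BUCKET="s3://mevadata-public-01"

# Ground-camera clips are named by date, time range, site, and camera ID,
# so a wildcard on the camera ID selects one camera's footage.
CAMERA="G300"                 # example camera ID; substitute your own
PATTERN="*${CAMERA}*.avi"
echo "Selecting clips matching: $PATTERN"

# No AWS account is needed; --no-sign-request downloads anonymously:
#   aws s3 sync "$BUCKET" ./meva-video/ --no-sign-request \
#       --exclude "*" --include "$PATTERN"
```

The `--exclude "*" --include PATTERN` pair restricts the sync to matching clips, which matters given the corpus is over 500GB in total.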

MEVA data git repository

The MEVA data git repository is an evolving collection of metadata and annotations released for the MEVA KF1 data. Highlights include:



A subset of the MEVA KF1 data has been annotated for the activities defined in the NIST ActEV Challenge. There are annotations for 22.1 hours (266 five-minute video clips) of data. The released annotations were generated by the same workflow used to produce the sequestered annotations used on the ActEV Sequestered Data Leaderboard.

Annotating MEVA

You can annotate the MEVA data with your preferred annotation toolchain. The following steps are recommended for annotating and using the data:

Download data

See the instructions here to obtain the data.

View annotation exemplars

Download and review short clips of visualized annotations for each activity type.

Download exemplars

Review Annotation Guidelines

Download the current activity definitions. These should guide which activities and objects are annotated.

Download guidelines

Generate annotations

Generate annotation files like these, following the format described here, from our annotation git repository.
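To give a sense of what consuming the annotation files looks like, here is a toy Python sketch that extracts activity names and frame spans from a list of KPF-style records. The record structure below is illustrative only, modeled loosely on the KPF YAML used for MEVA annotations; treat the field names as assumptions and consult the annotation git repository for the authoritative format. In practice you would obtain `records` by loading an `*.activities.yml` file with a YAML parser.

```python
# Illustrative KPF-style activity records; the real files are YAML and the
# exact field names should be checked against the annotation repository.
records = [
    {"meta": "example clip"},                      # meta records carry no activity
    {"act": {"act2": {"person_opens_facility_door": 1.0},
             "id2": 3,
             "timespan": [{"tsr0": [100, 215]}],
             "actors": [{"id1": 9, "timespan": [{"tsr0": [100, 215]}]}]}},
]

def activity_spans(records):
    """Yield (activity_name, start_frame, end_frame) for each activity record."""
    for rec in records:
        act = rec.get("act")
        if not act:
            continue  # skip meta and other non-activity records
        name = next(iter(act["act2"]))              # activity label
        start, end = act["timespan"][0]["tsr0"]     # frame span
        yield name, start, end

print(list(activity_spans(records)))
# -> [('person_opens_facility_door', 100, 215)]
```

A filter like this is a convenient first sanity check that generated annotations cover the expected activities and frame ranges before contributing them back.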

Contribute annotations

Contributing your annotations will increase the utility of the MEVA KF1 dataset for everyone. Please clone our annotation git repository and file a merge request to have your annotations merged back into the master branch.