The Multiview Extended Video with Activities (MEVA) dataset

The large-scale MEVA dataset is designed for activity detection in multi-camera environments. It was created under the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program to support DIVA performers and the broader research community.

NEWS:
16 December 2022: MEVID: Multi-view Extended Videos with Identities for Video Person Re-Identification is released! We've developed an additional layer of annotations for person re-identification on MEVA video. Additional information and a link to our WACV23 paper may be found below.

About MEVA data

MEVA aims to build a corpus of activity video collected from multiple viewpoints in realistic settings.

There is a MEVA data users Google group to facilitate communication and collaboration for those interested in working with the data. Join the conversation through the meva-data-users group.

Citing MEVA

The dataset is described in our WACV 2021 paper. The BibTeX citation is:

@InProceedings{Corona_2021_WACV,
    author    = {Corona, Kellie and Osterdahl, Katie and Collins, Roderic and Hoogs, Anthony},
    title     = {MEVA: A Large-Scale Multiview, Multimodal Video Dataset for Activity Detection},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {1060-1068}
}

Releases

Known Facility Release #1 ("KF1"):

The KF1 data was collected over a total of three weeks at the Muscatatuck Urban Training Center (MUTC) with a team of over 100 actors performing in multiple realistic scenarios, with a variety of scripted and unscripted activities. The camera fields of view, both overlapping and non-overlapping, capture person and vehicle activities in indoor and outdoor environments.

The camera infrastructure included commercial off-the-shelf electro-optical (EO) cameras, thermal infrared cameras deployed in several IR-EO pairs, and two DJI Inspire 1 v2 drones; handheld cameras also captured a range of still images.

The actors were also carrying GPS loggers; see here for more details.

Ground Truth Visualizations

Visualizations of MEVA ground truth are available via our DIVE analytics toolchain. Create an account and click here to view MEVA video and its associated ground truth.

Montage of randomly selected KF1 ground camera clips (re-encoded for accelerated playback).
Montage of randomly selected KF1 UAV video (re-encoded for accelerated playback).

MEVID Person Re-Identification Data

We're excited to release MEVID: Multi-view Extended Videos with Identities for Video Person Re-Identification. This additional layer of annotation on MEVA video provides:
  • An additional 289 clips (approximately 24 hours) of previously unreleased MEVA video
  • 158 identities, with an average of four outfits per identity
  • 33 viewpoints
  • 17 locations
  • Over 1.7M bounding boxes
  • Over 10.46M frames
Baseline results and our annotation toolchain are also provided.

Obtaining MEVID

Instructions for downloading MEVID annotations and supporting video may be found at https://github.com/Kitware/mevid.

Citing MEVID

The dataset is described in our WACV 2023 paper, MEVID: Multi-view Extended Videos with Identities for Video Person Re-Identification. The BibTeX citation is:

@InProceedings{Davila_MEVID_2023,
    author = {Davila, Daniel and Du, Dawei and Lewis, Bryon and Funk, Christopher and Van Pelt, Joseph and Collins, Roderic and Corona, Kellie and Brown, Matt and McCloskey, Scott and Hoogs, Anthony and Clipp, Brian},
    title = {MEVID: Multi-view Extended Videos with Identities for Video Person Re-Identification},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023}
}
MEVID example

Actor check-in photos (top row) are associated with tracklets from MEVA videos (middle and bottom rows) to create global IDs.

Visualization of the fine-grained MUTC 3D model.

Annotation sample (accelerated for display).

ACCESSING AND USING MEVA

License

All MEVA data is available for use under a CC BY-4.0 license; the general MEVA data license is available here.

Video data on AWS

MEVA video data is hosted in an Amazon Web Services (AWS) S3 bucket; downloads are provided at no cost through Amazon's AWS Public Dataset Program.

As of 12 Dec 2019, the video corpus includes:
  • 328 hours (516GB) of ground camera data
  • 4.6 hours (26GB) of UAV data

Click here for download instructions.
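As a quick sketch of what the download looks like, assuming you have the AWS CLI installed: because the bucket is public, no AWS account or credentials are needed. The bucket name below matches our download instructions; the data-drop prefix is illustrative, so check the bucket listing for the prefixes you actually want.

```shell
# Browse the top level of the public MEVA bucket; --no-sign-request
# skips credential lookup since the bucket allows anonymous access.
aws s3 ls --no-sign-request s3://mevadata-public-01/

# Mirror one data drop into a local directory (prefix is illustrative;
# the full ground-camera corpus is ~516GB, so sync selectively).
aws s3 sync --no-sign-request s3://mevadata-public-01/drops-123-r13/ ./meva-video/
```

`aws s3 sync` is resumable, so an interrupted download can simply be re-run and will only fetch files that are missing or changed locally.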

MEVA data git repository

The MEVA data git repository is an evolving collection of metadata and annotations released for the MEVA KF1 data. Highlights include:

Metadata

Annotations

The MEVA KF1 data is being annotated for the activities defined in the NIST ActEV Challenge. The activity definitions are available, as are renderings of activity exemplars.

There are several ongoing annotation efforts.

Annotating MEVA

The MEVA data can be annotated with your preferred annotation toolchain. We recommend the following steps for annotating and using the data:

Download data

See the instructions here to obtain the data.

View annotation exemplars

Download and review short clips of visualized annotations for each activity type.

Download exemplars

Review Annotation Guidelines

Download the current activity definitions. These should guide which activities and objects are annotated.

Download guidelines

Generate annotations

Generate annotation files like these, following the format described here in our annotation git repository.

Contribute annotations

Contributing your annotations will increase the utility of the MEVA KF1 dataset for everyone. Please clone our annotation git repository and file a merge request to have your annotations merged back into the master branch.
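The contribution steps above map onto a standard git branch-and-merge-request workflow. The sketch below uses a placeholder for the repository URL (see the link above) and illustrative file paths; adapt both to your setup.

```shell
# Clone the annotation repository (URL placeholder; see the link above)
git clone <annotation-repo-url> meva-annotations
cd meva-annotations

# Work on a topic branch so the merge request stays self-contained
git checkout -b my-annotations

# Copy your annotation files into the repository (paths are illustrative)
cp /path/to/your-annotations/*.yml annotations/
git add annotations/
git commit -m "Add activity annotations for new clips"

# Push the branch, then open a merge request against master in the web UI
git push -u origin my-annotations
```

Keeping each contribution on its own branch makes the merge request easy to review and keeps master limited to pooled, accepted annotations.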