Overview

The goal of the competition is to evaluate state-of-the-art methods for human identification at a distance (HID). The competition and workshop are endorsed by the IAPR Technical Committee on Biometrics (TC4). The workshop will be held in conjunction with the Asian Conference on Computer Vision (ACCV 2020), Nov 30 – Dec 4, 2020.

The winners will present their methods and results at the workshop, and experts on HID will be invited to give talks. We expect the workshop to attract researchers working on gait recognition and person identification and to promote research on HID.

The competition uses the CASIA-E dataset, which contains 1014 subjects with hundreds of video sequences per subject. For the competition we randomly selected 10 sequences per subject. We provide human body silhouettes, normalized to a fixed size of 128 x 128 pixels for convenience.

The training set contains the first 500 subjects in the dataset. For the remaining 514 subjects, 25% of the sequences form the validation set and the other 75% form the test set.

How to join this competition?

The competition is open to anyone interested in biometric technology.

The competition is hosted on CodaLab (HID 2020), where you can submit your results and get timely feedback.

Important Dates

Deadline of first phase: October 15, 2020
Deadline of second phase: October 25, 2020
Competition results announcement: October 30, 2020
Method description submission: November 10, 2020
Workshop: Half day on December 3, 2020

Dataset

How to get the dataset?

Several download options are provided below:

  1. OneDrive
  2. Google Drive
  3. Baidu Drive (password: 5pu7)

The dataset was produced as follows: videos of walking people were captured with cameras at different heights and viewing angles; human silhouette images were then extracted using human detection and segmentation algorithms; finally, the silhouettes were normalized to a uniform size.
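
For illustration, the final normalization step could look like the minimal Python sketch below. It assumes a binary silhouette mask as input; the organizers' exact detector, segmenter, and centering rule are not specified here, so the official preprocessing may differ.

import cv2
import numpy as np

def normalize_silhouette(mask, size=128):
    # Crop a binary silhouette to its bounding box and resize it to size x size.
    # This only illustrates the normalization idea, not the official pipeline.
    ys, xs = np.nonzero(mask)              # rows and columns containing the body
    if len(ys) == 0:                       # empty frame: nothing to normalize
        return np.zeros((size, size), dtype=np.uint8)
    cropped = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(cropped, (size, size), interpolation=cv2.INTER_NEAREST)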

In this competition, you are asked to predict the subject ID of each walking video. The training data are available in the train/ folder, with the corresponding subject IDs in train.csv. At test time, performance is measured with the gallery-probe protocol commonly used in face recognition, so the test set consists of two parts, a gallery set and a probe set, located in test_gallery/ and test_probe/ respectively. Note that the subjects in the training set and the test set are completely disjoint.
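
For illustration, assuming the folder layout detailed under File descriptions below, the gallery and probe videos can be enumerated with a few lines of Python (a minimal sketch; paths may need adapting):

import os

def list_gallery(root="test_gallery"):
    # (subject_ID, video folder) pairs, one per gallery video
    gallery = []
    for subject_id in sorted(os.listdir(root)):
        subject_dir = os.path.join(root, subject_id)
        for video_id in sorted(os.listdir(subject_dir)):
            gallery.append((subject_id, os.path.join(subject_dir, video_id)))
    return gallery

def list_probe(root="test_probe"):
    # (video_ID, video folder) pairs, one per probe video
    return [(video_id, os.path.join(root, video_id))
            for video_id in sorted(os.listdir(root))]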

How to identify a subject in the test process?

Human body identification

Each subject in the test set has one video in the gallery set, which serves as a template. You are asked to predict the subject ID of each video in the probe set based on the gallery data. A common identification method is to compute the L2 distance between the probe and each gallery template and assign the probe to the nearest subject; of course, you may use any other similarity measure as well.
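
A minimal sketch of this nearest-neighbour step is given below. It assumes each video has already been reduced to a fixed-length feature vector by a model of your choice; the feature extractor itself is not shown.

import numpy as np

def identify(probe_features, gallery_features, gallery_ids):
    # probe_features:   (P, D) array, one feature vector per probe video
    # gallery_features: (G, D) array, one feature vector per gallery template
    # gallery_ids:      length-G list of subject IDs aligned with gallery_features
    # Pairwise squared L2 distances between probes and gallery templates
    dists = ((probe_features[:, None, :] - gallery_features[None, :, :]) ** 2).sum(-1)
    nearest = dists.argmin(axis=1)         # index of the closest gallery template
    return [gallery_ids[i] for i in nearest]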

File descriptions

  • train.csv – the training set labels: the subject ID corresponding to each video in the training set.
  • train/ – contains the training data; its file organization is ./train/subject_ID/video_ID/image_data.
  • test_probe/ – contains the probe data; its file organization is ./test_probe/video_ID/image_data.
  • test_gallery/ – contains the gallery data; its file organization is ./test_gallery/subject_ID/video_ID/image_data.

  • SampleSubmission.zip – a sample submission. Note that your submission.csv must be placed in a zip archive and keep the same file name. The csv contains two columns, videoID and subjectID; every video in ./test_probe requires a predicted subject ID, and the predictions are filled into this file. Note that subjectID must be in integer format (see the sketch below).
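
The following minimal Python sketch shows one way to build such a submission. It assumes the predictions have already been computed as (videoID, subjectID) pairs and that the csv carries a header row with the two column names described above.

import csv
import zipfile

def write_submission(predictions, zip_path="submission.zip"):
    # predictions: list of (video_id, subject_id) pairs, one per probe video
    with open("submission.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["videoID", "subjectID"])        # column names as described above
        for video_id, subject_id in predictions:
            writer.writerow([video_id, int(subject_id)]) # subjectID must be an integer
    # Pack the csv into a zip archive, keeping the name submission.csv inside.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write("submission.csv")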

Competition sample code

The sample code can be found in this GitHub project. It achieves about 20% accuracy. The model structure follows the paper below; please cite it if it helps your research:

@article{zhang2019comprehensive,
  title={A comprehensive study on gait biometrics using a joint CNN-based method},
  author={Zhang, Yuqi and Huang, Yongzhen and Wang, Liang and Yu, Shiqi},
  journal={Pattern Recognition},
  volume={93},
  pages={228--236},
  year={2019},
  publisher={Elsevier}
}
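
To give a flavour of a CNN-based pipeline, the sketch below embeds a 128 x 128 gait template (for example, the average of a video's silhouette frames) with a tiny PyTorch network. It is only an illustration and is not the model from the paper or the sample code.

import torch
import torch.nn as nn

class GaitCNN(nn.Module):
    # Tiny CNN that maps a 1 x 128 x 128 gait template to a feature vector.
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, x):                  # x: (N, 1, 128, 128)
        return self.fc(self.conv(x).flatten(1))

# A gait template can be built by averaging the silhouette frames of one video
# and adding a batch dimension before the forward pass:
# template = frames.float().mean(dim=0, keepdim=True)   # frames: (T, 128, 128)
# feature = GaitCNN()(template.unsqueeze(0))            # shape (1, feat_dim)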

Committee

Advisory Committee

  • Prof. Tieniu Tan, Institute of Automation, Chinese Academy of Sciences, China
  • Prof. Yasushi Yagi, Osaka University, Japan
  • Prof. Mark Nixon, University of Southampton, UK

Organizers

  • Prof. Shiqi Yu, Southern University of Science and Technology, China
  • Prof. Liang Wang, Institute of Automation, Chinese Academy of Sciences, China
  • Prof. Yongzhen Huang, Institute of Automation, Chinese Academy of Sciences, China; Watrix technology co. ltd, China
  • Prof. Yasushi Makihara, Osaka University, Japan
  • Prof. Nicolás Guil, University of Málaga, Spain
  • Prof. Manuel J. Marín-Jiménez, University of Córdoba, Spain
  • Dr. Edel B. García Reyes, Shenzhen Institute of Artificial Intelligence and Robotics for Society, China
  • Prof. Feng Zheng, Southern University of Science and Technology, China
  • Prof. Md. Atiqur Rahman Ahad, University of Dhaka, Bangladesh; Osaka University, Japan

FAQ

Q: Can I use data outside of the training set to train my model?
A: Yes, you can, but you must describe what data you used and how you used it in your method description.

Q: How many members can my team have?
A: We do not limit the number of members in a team.

Q: Who cannot participate in the competition?
A: Members of the organizers' research groups cannot participate, nor can employees or interns of the sponsor company.

Q: Why are there empty folders in the dataset?
A: Since the dataset was generated randomly, it contains some empty folders. You can refer to the sample code to see how to skip them.
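For example, a minimal Python sketch (not the sample code itself) that keeps only video folders containing frames:

import os

def non_empty_video_dirs(root):
    # Yield leaf video folders under root that actually contain image frames.
    for dirpath, dirnames, filenames in os.walk(root):
        if not dirnames and filenames:
            yield dirpath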

Q: How to contact us?
A: For any request or information, please send an email to: Prof. Shiqi Yu, yusq@sustech.edu.cn

Leaderboard

Acknowledgments

We would like to thank the Institute of Automation, Chinese Academy of Sciences for providing the dataset CASIA-E for the competition.