Challenge

Leaderboards

Our competition ran from June to October 2021 and consisted of two phases, each lasting two months. Over the whole period, we received 1,628 submissions from 85 teams. The final rankings comprise two sub-tracks, 'Main Result' and 'SFR Result'; the top-3 entries of each sub-track, with their organization information, are listed below.

Main Result

1st place

Team name: agir

Organization: SenseTime Group Limited - Base Mode

2nd place

Team name: Ethan.y

Organization: Face Group (David), AI Lab, Kakao Enterprise

3rd place

Team name: mind_ft

Organization: Alibaba DAMO Academy & NUS & NTU & AMS & CASIA

SFR Result

1st place

Team name: mind_ft

Organization: Alibaba DAMO Academy & NUS & NTU & AMS & CASIA

2nd place

Team name: victor-2021

Organization: NTU & ZJU & NUS & CASIA & Shenzhen Technology University

2nd place

Team name: HYL_Dave

Organization: Toppan

3rd place

Team name: Hukangli

Organization: Shenzhen Deepcam Information Technologies & LinkSprite Technologies, USA

WebFace260M Track of ICCV21-MFR

The Masked Face Recognition Challenge & Workshop (MFR) will be held in conjunction with the International Conference on Computer Vision (ICCV) 2021.

The workshop comprises two tracks: the WebFace260M track (this one) and the InsightFace track.

Traditionally, face recognition systems are presented with mostly non-occluded faces, which include primary facial features such as the eyes, nose, and mouth. However, faces can be occluded by masks in a number of circumstances, such as pandemics, medical settings, excessive pollution, or laboratories. During the COVID-19 pandemic, almost everyone wears a facial mask, which poses a huge challenge to face recognition. For instance, a person wearing a mask may attempt to authenticate against a prior visa or passport photo at the airport. Traditional face recognition systems may not effectively recognize masked faces, while removing the mask for authentication increases the risk of virus infection. To cope with these challenging scenarios arising from wearing masks, it is crucial to improve existing face recognition approaches. Recently, some commercial providers have announced face recognition algorithms capable of handling face masks, and an increasing number of research publications have surfaced on the topic of recognizing people wearing masks. However, due to the sudden outbreak of the epidemic, there is as yet no publicly available masked face recognition benchmark. In this WebFace260M Track of ICCV21-MFR, we have therefore developed a comprehensive benchmark for evaluating both standard and masked face recognition.

Rules (tentative):

Please refer to https://arxiv.org/abs/2108.07189 for the training data, evaluation protocols, submission rules, test set and metrics for SFR and MFR, ranking criterion, baseline solution, and preliminary competition results. Note: the full WebFace260M data is open to all applicants whose license agreements are approved.

Mask data augmentation is allowed, for example this. The applied mask augmentation tool must be reproducible.
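To illustrate the idea of mask augmentation, here is a deliberately minimal sketch. Real augmentation tools warp a textured mask mesh onto detected facial landmarks; this toy version only paints a flat region onto an aligned face crop, and the 112×112 crop size and mask color are illustrative assumptions, not part of the rules.

```python
import numpy as np

def add_synthetic_mask(face, color=(200, 200, 210)):
    """Overlay a crude rectangular 'mask' on the lower part of an
    aligned face crop (H x W x 3, uint8). A real tool would warp a
    textured mask onto facial landmarks; this is only a stand-in."""
    out = face.copy()
    h, w, _ = out.shape
    top = int(h * 0.55)      # roughly below the nose on aligned crops
    out[top:, :, :] = color  # paint the mask region
    return out

# toy usage on a random "face" crop
face = (np.random.rand(112, 112, 3) * 255).astype(np.uint8)
masked = add_synthetic_mask(face)
```

Whatever tool is actually used, the key requirement from the rule above is reproducibility: the same input face should always yield the same augmented output.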

External datasets and pretrained models are both prohibited in the second phase.

Participants submit an ONNX model and receive scores from our online evaluation. Test images remain invisible throughout the challenge.

The WebFace260M Track adopts the FRUITS (Face Recognition Under Inference Time conStraint) protocol: a 1000 ms constraint for the whole face recognition system, with inference time measured on a single core of an Intel Xeon CPU E5-2630-v4 @ 2.20 GHz processor (please see our paper for details). Only solutions satisfying this constraint are qualified for awards.
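Before submitting, it is worth checking the latency budget locally. The sketch below is a hypothetical harness, not the official timer: it wall-clocks a callable that stands in for the full pipeline (detection, alignment, feature extraction) and averages over several runs. The dummy pipeline and run count are made-up placeholders; pinning the process to one core (e.g. with taskset) is left to the user.

```python
import time
import numpy as np

def measure_inference_ms(infer, image, runs=10):
    """Wall-clock the full recognition pipeline and return the average
    latency in milliseconds. FRUITS budgets 1000 ms for the whole
    system on a single CPU core."""
    infer(image)  # warm-up so one-time initialisation is not billed
    start = time.perf_counter()
    for _ in range(runs):
        infer(image)
    return (time.perf_counter() - start) * 1000.0 / runs

def dummy_pipeline(img):
    # toy stand-in for a real pipeline: returns a 512-d unit feature
    feat = np.ones(512, dtype=np.float32)
    return feat / np.linalg.norm(feat)

img = np.zeros((112, 112, 3), dtype=np.uint8)
latency = measure_inference_ms(dummy_pipeline, img)
```

Note that a local measurement on different hardware only gives a rough indication; the binding number is the one produced on the organizers' E5-2630-v4 machine.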

To avoid overfitting on masked/standard face recognition, we have revised the formula for calculating all three MFR metrics on the leaderboard. The new MFR metrics are weighted sums that consider both masked and standard faces at the same time. However, owing to limitations of CodaLab, we cannot modify the leaderboard's headers and contents, so we keep the existing leaderboard structure but replace the MFR metrics with the new values. The new formulas are shown below.

(1) New All-Masked (MFR) = 0.25 * Old All-Masked (MFR) + 0.75 * All (SFR)

(2) New Wild-Masked (MFR) = 0.25 * Old Wild-Masked (MFR) + 0.75 * Wild (SFR)

(3) New Controlled-Masked (MFR) = 0.25 * Old Controlled-Masked (MFR) + 0.75 * Controlled (SFR)
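All three formulas above share the same 0.25/0.75 blend, so they reduce to one small helper. The weights come from the formulas; the example accuracies plugged in below are made up for illustration.

```python
def revised_mfr(old_mfr, sfr, w_mask=0.25):
    """Blend the old masked metric with its standard-face counterpart:
    new = 0.25 * old_masked + 0.75 * standard, per the formulas above."""
    return w_mask * old_mfr + (1.0 - w_mask) * sfr

# e.g. All-Masked: old MFR accuracy 0.60, All (SFR) accuracy 0.90
print(revised_mfr(0.60, 0.90))  # -> 0.825
```

The same call computes the Wild-Masked and Controlled-Masked variants by passing the corresponding old MFR and SFR values.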

Top-ranked participants must provide their solutions and code to validate their results after the submission period closes.

Submission Guide

Please refer to https://github.com/WebFace260M/webface260m-iccv21-mfr for the submission package and details.
Submission server link: https://competitions.codalab.org/competitions/32478

1. Participants should put all models and files into $MFR_ROOT/assets/.
2. Participants must provide $MFR_ROOT/pywebface260mmfr_implement.py, which contains the PyWebFace260M class.
3. Participants should run demo_feat.py in $MFR_ROOT/demo/ inside the provided docker image to verify the correctness of features and the time constraint.
4. Participants must package the code directory for submission using zip -r xxx.zip $MFR_ROOT and then upload it to CodaLab.
5. Please sign up with your real organization name. You can hide the organization name in our system if you like.
6. You can decide which submission is displayed on the leaderboard.
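As a rough orientation, the required entry point in pywebface260mmfr_implement.py might look like the skeleton below. Only the class name PyWebFace260M comes from the guide above; the method names, signatures, and the 512-d feature size are placeholder assumptions, and the authoritative interface is the one in the linked GitHub repository.

```python
import numpy as np

class PyWebFace260M:
    """Skeleton of the submission entry-point class. Method names and
    signatures here are illustrative; see the official repo for the
    actual required interface."""

    def load(self, assets_dir):
        # a real submission would load an ONNX model from assets_dir,
        # e.g. with onnxruntime.InferenceSession; stubbed out here
        self.feat_dim = 512

    def inference(self, image):
        # image: aligned face crop (112 x 112 x 3, uint8)
        # a real model runs here; we return a unit-norm dummy feature
        feat = np.ones(self.feat_dim, dtype=np.float32)
        return feat / np.linalg.norm(feat)

# toy usage mirroring what an evaluation harness might do
impl = PyWebFace260M()
impl.load("assets/")
feature = impl.inference(np.zeros((112, 112, 3), dtype=np.uint8))
```

Running demo_feat.py inside the provided docker image (step 3 above) is what actually confirms your class matches the expected interface.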

Test Set for WebFace260M Track

Since public evaluations are mostly saturated and may contain noise, we manually construct an elaborate test set. It is well known that recognizing strangers, especially similar-looking ones, is difficult even for experienced vision researchers. Therefore, our multi-ethnic annotators select only celebrities familiar to them, which ensures the high quality of the test set. In addition, annotators are encouraged to gather attribute-balanced faces, and recognition models are used to guide hard-sample collection. The statistics of the final test set are listed in the table below. In total, there are 60,926 faces of 2,478 identities. Rich attributes (e.g. age, gender, race, controlled, wild, masked) are accurately annotated.

The # identities and # faces statistics of our test set.

Samples from our test set. The left, middle, and right groups of three columns show faces with the controlled, wild, and masked attributes, respectively.

Organizers

Zheng Zhu, Guan Huang, Jiankang Deng, Yun Ye, Junjie Huang, Xinze Chen, Jiagang Zhu, Tian Yang, Jia Guo, Jiwen Lu, Dalong Du and Jie Zhou

WeChat discussion group:

info@face-benchmark.org
Copyright © XForward AI Technology Co., Ltd. 2018-2021