Visual Domain Adaptation Challenge

(VisDA-2020)



News

Introducing the 2020 VisDA Challenge! This year we focus on domain adaptive instance retrieval, where the source and target domains have completely different classes (instance IDs). The task is to retrieve pedestrian instances with the same ID as the query image. This problem differs significantly from previous VisDA challenges, in which the source and target domains shared at least some classes. Moreover, ID matching depends on fine-grained details, making the problem harder than before. For details about previous years' challenges and their winning methods, see the VisDA [2017] [2018] [2019] pages.

Important Announcement: Consistent with the [VisDA 2019] challenge, this year's challenge winners will be required to release a six-page technical report (ECCV paper format) and code to replicate their results. See the Rules section below for details.

  • Aug 25: Challenge winners have been finalized. Congratulations to the winners!
  • Jun 25: The test set has been released and the test phase has begun.
  • May 15: The evaluation server is now online.
  • May 1: The training and validation sets have been released and the validation phase has begun.
  • Apr 1: Registration starts.

Winners of the VisDA-2020 Challenge

Domain Adaptive Pedestrian Re-identification [full leaderboard]

  #  Team     Affiliation                             mAP    Code & Report
  1  Vimar    Zhejiang University and Alibaba Group   76.56  [code and report]
  2  Yxge     Chinese University of Hong Kong         74.78  [code and report]
  3  Xiangyu  Ruiyan Technology                       72.39  [code and report]

Overview

We are pleased to announce the 2020 Visual Domain Adaptation (VisDA-2020) Challenge! It is well known that the success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. Unfortunately, performance often drops significantly when a model is presented with data from a new deployment domain that it did not see during training, a problem known as dataset shift. The VisDA challenge aims to test domain adaptation methods' ability to transfer source knowledge and adapt it to novel target domains.


The competition will take place from May to July 2020, and the top-performing teams will be invited to present their results at the TASK-CV workshop at ECCV 2020 in Glasgow in September. This year's challenge focuses on Domain Adaptive Pedestrian Re-identification.


Sponsors



Prizes

The top three teams will receive prizes:

  • 1st place: 1000 USD + Certificate
  • 2nd place: 600 USD + Certificate
  • 3rd place: 400 USD + Certificate

Evaluation

We will use CodaLab to evaluate submissions and maintain a leaderboard. To register for the evaluation server, please create an account on CodaLab and enter as a participant in the VisDA-2020 competition.

If you are working as a team, you may either register one account for the whole team or register multiple accounts under the same team name. If you use one account, please list the names of all team members; this can be edited in the “User Settings” tab. If your team registers multiple accounts, please follow the protocol explained in the CodaLab documentation. In either case, your team must adhere to the per-team submission limits (20 entries per team per day during the validation phase).


Rules

The VisDA challenge tests adaptation and model transfer, so its rules differ from those of most challenges. Please read them carefully.

Supervised Training: Teams may submit test results only from models trained on the source domain data. To ensure a fair comparison, we do not allow the use of any other external training data, modification of the provided training dataset, or any form of manual data labeling.

Unsupervised Training: Models may be adapted (trained) on the target domain, using the provided target training set, in an unsupervised way, i.e., without labels.
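For concreteness, one common fully unsupervised way to adapt a source-trained re-ID model to the unlabeled target training set is clustering-based pseudo-labeling. The sketch below illustrates the idea with scikit-learn's DBSCAN; it is only a minimal example, and all names outside the library calls (e.g., `extract_features`, `model`) are hypothetical placeholders rather than parts of the official baseline or development kit.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def assign_pseudo_labels(features, eps=0.6, min_samples=4):
    """Cluster L2-normalized target features; each cluster id becomes a pseudo identity.

    `features` is an (N, D) array extracted from the *unlabeled* target
    training images by the current (source-trained) model.
    """
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    keep = labels != -1               # DBSCAN marks outliers with -1; drop them
    return labels, keep

# One adaptation round (typically repeated several times):
#   1. feats = extract_features(model, target_train_images)   # hypothetical helper
#   2. pseudo_ids, keep = assign_pseudo_labels(feats)
#   3. fine-tune `model` on (target_train_images[keep], pseudo_ids[keep])
#      with a standard re-ID loss (e.g., cross-entropy + triplet loss).
```

In practice the clustering and fine-tuning steps are usually alternated for several rounds, with pseudo labels re-estimated from the updated model each time; no human labeling is involved at any point.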

Source Models: The performance of a domain adaptation algorithm greatly depends on the baseline performance of the model trained only on source data. We ask that teams submit two sets of results: 1) predictions obtained only with the source-trained model, and 2) predictions obtained with the adapted model. See the development kit for submission formatting details.

Leaderboard: The main leaderboard for each competition track will show the results of adapted models and will be used to determine the final team ranks. The expanded leaderboard will additionally show each team's source-only models, i.e., those trained only on the source domain without any adaptation. These results are useful for estimating how much a method improves upon its source-only model, but will not be used to determine team ranks.

Rank: The final ranking will be determined by performance on the target test set. The evaluation metrics used to rank each team are mean Average Precision (mAP) and the Cumulative Matching Characteristics (CMC) curve.
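For reference, the snippet below is a minimal sketch of how mAP and the CMC curve are typically computed from a query-gallery distance matrix in re-ID. It is illustrative only: the official evaluation script in the development kit is authoritative, and the standard same-camera filtering step is omitted here for brevity.

```python
import numpy as np

def evaluate_rank(dist, q_ids, g_ids, max_rank=10):
    """Compute mAP and the CMC curve from a (num_query, num_gallery) distance matrix.

    Simplified sketch: assumes the gallery has at least `max_rank` images and
    omits the same-camera filtering used in standard re-ID evaluation.
    """
    all_ap, all_cmc = [], []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                        # gallery sorted by distance
        matches = (g_ids[order] == q_ids[i]).astype(np.int32)
        if not matches.any():
            continue                                       # query id absent from gallery
        # CMC: 1 from the first correct match onward
        cmc = matches.cumsum()
        cmc[cmc > 1] = 1
        all_cmc.append(cmc[:max_rank])
        # Average precision for this query
        hits = matches.cumsum()
        precision = hits / (np.arange(len(matches)) + 1.0)
        all_ap.append((precision * matches).sum() / matches.sum())
    mAP = float(np.mean(all_ap))
    cmc_curve = np.mean(np.stack(all_cmc), axis=0)         # cmc_curve[0] is Rank-1
    return mAP, cmc_curve
```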

Additional Datasets: Teams that wish to be listed on the leaderboard and win the challenge awards are NOT allowed to use any external data for either training or validation. The winning teams are required to submit their training and testing code for verification after the challenge submission deadline, to ensure that no external data was used for training.


FAQ

  1. Can we train models on data other than the source domain?
     No, except that participants may pre-train their models on ImageNet.

  2. In the unsupervised pedestrian re-identification domain adaptation challenge, can we use the testing split of the given data to tune parameters?
     No. In the training phase, only the training split may be used to train the model. In other words, using the testing split of the source or target domain is prohibited.

  3. Can we assign pseudo labels to the unlabeled data in the target domain?
     Yes, assigning pseudo labels to the target training set is allowed as long as no human labeling is involved. Please DO NOT assign pseudo labels to the target validation or target test sets. (There are three sets in the target domain: target training, target validation, and target test.)

  4. Can we use personX_spgan in the challenge?
     Yes. SPGAN is an unsupervised image-level alignment method, so please feel free to use it.

  5. Can we use target_validation or target_test for training without using their labels?
     No. Do not use the target validation or test sets for training; only the target training set may be used for training.

  6. Can we use target_validation or target_test for re-ranking?
     Yes. Re-ranking is a post-processing technique (a simple label-free example is sketched after this FAQ).

  7. Can we use the camera index of target_train for training?
     Yes. We already provide the camera indices of the target training samples.

  8. Can we use the camera index of target_validation or target_test for re-ranking during evaluation?
     Yes, but note that the camera index of the target test set is not available, so you may need to train an auxiliary model to predict it.

  9. Do we have to use the provided baseline models?
     No, they are provided for your convenience and are optional.

  10. How many submissions can each team make per competition track?
      For the validation phase, each team is limited to 20 uploads per day, with no restriction on the total number of submissions. For the test phase, each team is limited to 1 upload per day and 20 uploads in total. Only one account per team may be used to submit results. Do not create multiple accounts for a single project to circumvent this limit, as this will result in disqualification.

  11. Can multiple teams enter from the same research group?
      Yes, as long as each team is composed of different members.

  12. Can external data be used?
      No. The source domain is a synthetic dataset simulated from PersonX. The only exception is optional initialization of models with weights pre-trained on ImageNet, which is allowed and must be declared in the submission. Please see the challenge rules for more details.

  13. Are challenge participants required to reveal all details of their methods?
      Yes. The top-performing teams are required to release a write-up of at least four pages describing their methods, along with code to reproduce their results, in order to claim a prize. The detailed procedure for releasing the code is to be determined.

  14. Do participants need to adhere to TASK-CV abstract submission deadlines to participate in the challenge?
      Submitting a TASK-CV workshop abstract is not mandatory to participate in the challenge. However, any team that wishes to be considered for prizes or to be invited to speak at the workshop must submit a six-page abstract (ECCV paper format) by email to visda2020-organizers@googlegroups.com within one week of the challenge end. The top-performing teams that submit abstracts will be invited to present their approaches at the workshop.
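As referenced in the re-ranking FAQ item above, the sketch below shows one simple, label-free post-processing step: query expansion, in which each query feature is averaged with its nearest gallery features before distances are recomputed. This is only an illustrative stand-in chosen for brevity; popular re-ID re-ranking methods such as k-reciprocal re-ranking are more involved, and nothing here is part of the official toolkit.

```python
import numpy as np

def query_expansion(query_feats, gallery_feats, top_k=5):
    """Average each query feature with its top-k nearest gallery features,
    then recompute query-gallery distances.

    Uses only unlabeled gallery features, so it is purely a
    post-processing step.
    """
    # L2-normalize so that the dot product equals cosine similarity
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T                                  # (num_query, num_gallery)
    topk = np.argsort(-sim, axis=1)[:, :top_k]     # indices of nearest gallery items
    expanded = q + g[topk].sum(axis=1)             # aggregate neighbors into each query
    expanded /= np.linalg.norm(expanded, axis=1, keepdims=True)
    dist = 1.0 - expanded @ g.T                    # re-ranked cosine distance
    return dist
```

Because such post-processing relies only on unlabeled gallery features at inference time, it does not touch target validation or test labels.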



Organizers

Kate Saenko (Boston University), Liang Zheng (Australian National University), Xingchao Peng (Boston University), Weijian Deng (Australian National University)


Broader Impact

A distinctive feature of this competition is learning from synthetic 3D person data. We aim not only to advance the state of the art in domain adaptation, metric learning, and deep neural networks, but also, importantly, to reduce the reliance of such systems on real-world datasets. While we evaluate the algorithms on real-world data, we have adopted strict measures to protect privacy; for example, all faces have been blurred. Participants have signed our data protection agreement, which forbids posting or distributing test images in papers or other public venues. We believe these measures significantly improve data safety and privacy, while still allowing researchers to develop useful technologies.