Visual Domain Adaptation Challenge
(VisDA-2020)
News
Introducing the 2020 VisDA Challenge! This year we focus on domain adaptive instance retrieval, where the source and target domains have completely different classes (instance IDs). The task is to retrieve the pedestrian instances with the same ID as the query image. This problem differs significantly from previous VisDA challenges, where the source and target domains shared at least some classes. Moreover, ID matching depends on fine-grained details, making the problem harder than before. For details about previous years' challenges and the winning methods, see the VisDA [2017] [2018] [2019] pages.
Important Announcement: Consistent with the [VisDA 2019] challenge, this year's challenge winners will be required to release a six-page technical report (ECCV paper format) and code to replicate their results. See the Rules section below for details.
- Aug 25: We have finalized the challenge winners. Congratulations to the winners!
- Jun 25: We have released the test set; the test phase has begun.
- May 15: The evaluation server is now online.
- May 1: We have released the training and validation sets; the validation phase has begun.
- Apr 1: Registration opens.
Winners of the VisDA-2020 Challenge
# | Team Name | Affiliation | mAP (%) |
---|---|---|---|
1 | Vimar | Zhejiang University and Alibaba Group | 76.56 [codes and report] |
2 | Yxge | Chinese University of Hong Kong | 74.78 [codes and report] |
3 | Xiangyu | Ruiyan Technology | 72.39 [codes and report] |
Overview
We are pleased to announce the 2020 Visual Domain Adaptation (VisDA-2020) Challenge! It is well known that the success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. Unfortunately, performance often drops significantly when a model is presented with data from a new deployment domain that it did not see during training, a problem known as dataset shift. The VisDA challenge aims to test domain adaptation methods' ability to transfer knowledge learned on a source domain and adapt it to novel target domains.
The competition will take place from May to July 2020, and the top-performing teams will be invited to present their results at the ECCV 2020 workshop in Glasgow in September. This year's challenge focuses on Domain Adaptive Pedestrian Re-identification.
Prizes
The top three teams will receive prizes:
- 1st place: 1000 USD + Certificate
- 2nd place: 600 USD + Certificate
- 3rd place: 400 USD + Certificate
Evaluation
We will use CodaLab to evaluate submissions and maintain a leaderboard. To register for the evaluation server, please create an account on CodaLab and enter the VisDA-2020 competition as a participant.
If you are working as a team, you may either register a single account for your team or register multiple accounts under the same team name. If you choose to use a single account, please list the names of all of your team members; this can be modified in the "User Settings" tab. If your team registers multiple accounts, please do so using the protocol explained by CodaLab here. Regardless of whether you register one or multiple accounts, your team must adhere to the per-team submission limits (20 entries per team per day during the validation phase).
Rules
The VisDA challenge tests adaptation and model transfer, so its rules differ from those of most challenges. Please read them carefully.
Supervised Training: Teams may only submit test results of models trained on the source domain data. To ensure a fair comparison, we do not allow the use of any other external training data, modification of the provided training dataset, or any form of manual data labeling.
Unsupervised training: Models can be adapted (trained) on the target domain (using the provided target training set) in an unsupervised way, i.e. without labels.
Source Models: The performance of a domain adaptation algorithm greatly depends on the baseline performance of the model trained only on source data. We ask that teams submit two sets of results: 1) predictions obtained only with the source-trained model, and 2) predictions obtained with the adapted model. See the development kit for submission formatting details.
Leaderboard: The main leaderboard for each competition track will show the results of adapted models and will be used to determine the final team ranks. The expanded leaderboard will additionally show each team's source-only results, i.e. those obtained with models trained only on the source domain without any adaptation. These results are useful for estimating how much a method improves upon its source-only baseline, but they will not be used to determine team ranks.
Rank: The final rank will be determined by performance on the target test set. The evaluation metrics used to rank each team are mean Average Precision (mAP) and the Cumulative Matching Characteristics (CMC) curve; an illustrative sketch of both metrics is given after this section.
Additional Datasets: Teams that wish to be listed on the leaderboard and win the challenge awards are NOT allowed to use any external data for either training or validation. The winning teams are required to submit their training and testing code for verification after the challenge submission deadline, in order to ensure that no external data was used for training.
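For concreteness, the sketch below shows one common way to compute mAP and the CMC curve for query-to-gallery retrieval: gallery features are ranked by distance to each query, and gallery images sharing both the ID and the camera of the query are excluded, a standard re-ID convention. This is only an illustration; the evaluation code in the development kit is authoritative, and names such as `evaluate_rank` are placeholders.

```python
# Illustrative re-ID evaluation sketch (NOT the official dev-kit code).
import numpy as np

def evaluate_rank(q_feats, g_feats, q_pids, g_pids, q_camids, g_camids, topk=100):
    """q_feats: (Nq, D) query features; g_feats: (Ng, D) gallery features.
    Returns (mAP, CMC curve over ranks 1..topk). Assumes the gallery is large
    enough that at least `topk` candidates remain after filtering."""
    # Squared-Euclidean distance via ||a||^2 + ||b||^2 - 2 a.b
    dist = ((q_feats ** 2).sum(1)[:, None] + (g_feats ** 2).sum(1)[None, :]
            - 2.0 * q_feats @ g_feats.T)
    all_ap, all_cmc = [], []
    for i in range(len(q_pids)):
        order = np.argsort(dist[i])                      # gallery sorted by distance
        # Standard convention: drop gallery images with the same ID AND camera.
        keep = ~((g_pids[order] == q_pids[i]) & (g_camids[order] == q_camids[i]))
        matches = (g_pids[order][keep] == q_pids[i]).astype(np.int32)
        if matches.sum() == 0:                           # no valid match for this query
            continue
        cmc = matches.cumsum()                           # CMC: 1 from the first hit onward
        cmc[cmc > 1] = 1
        all_cmc.append(cmc[:topk])
        hits = matches.cumsum()                          # average precision for this query
        precision = hits / (np.arange(len(matches)) + 1.0)
        all_ap.append(float((precision * matches).sum() / matches.sum()))
    return float(np.mean(all_ap)), np.stack(all_cmc).mean(axis=0)
```

Under this convention, rank-1 accuracy is simply the first entry of the returned CMC curve.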
FAQ
- Q: Can we train models on data other than the source domain?
  A: Participants may elect to pre-train their models on ImageNet only.
- Q: In the unsupervised pedestrian re-identification domain adaptation challenge, can we use the testing split of the given data to tune parameters?
  A: No. In the training phase, only the training split can be used to train the model; using the testing split of the source or target domain is prohibited.
- Q: Can we assign pseudo labels to the unlabeled data in the target domain?
  A: Yes, assigning pseudo labels to the target training set is allowed as long as no human labeling is involved. Please do NOT assign pseudo labels to the target validation or target test sets. (There are three sets in the target domain: target training, target validation, and target test.) A minimal clustering-based sketch is given at the end of this FAQ.
- Q: Can we use personX_spgan in the challenge?
  A: Yes. SPGAN is an unsupervised image-level alignment method; feel free to use it.
- Q: Can we use target_validation or target_test for training without using their labels?
  A: No. Do not use target_validation or target_test for training; only the target training set may be used for training.
- Q: Can we use target_validation or target_test for re-ranking?
  A: Yes. Re-ranking is a post-processing technique.
- Q: Can we use the camera index of target_train for training?
  A: Yes. Camera indices of the target training samples are already provided.
- Q: Can we use the camera index of target_validation or target_test for re-ranking during evaluation?
  A: Yes, but note that camera indices are not provided for the target test set, so you may need to train an auxiliary model to predict them.
- Q: Do we have to use the provided baseline models?
  A: No, they are provided for your convenience and are optional.
- Q: How many submissions can each team submit per competition track?
  A: During the validation phase, each team is limited to 20 uploads per day, with no restriction on the total number of submissions. During the test phase, each team is limited to 1 upload per day and 20 uploads in total. Only one account per team may be used to submit results; do not create multiple accounts for a single project to circumvent this limit, as doing so will result in disqualification.
- Q: Can multiple teams enter from the same research group?
  A: Yes, so long as each team is comprised of different members.
- Q: Can external data be used?
  A: No. The source domain is a synthetic dataset simulated from PersonX. Optional initialization of models with weights pre-trained on ImageNet is allowed and must be declared in the submission. Please see the challenge rules for more details.
- Q: Are challenge participants required to reveal all details of their methods?
  A: Yes. To claim a prize, the top-performing teams are required to provide a write-up of at least four pages describing their methods, along with code to reproduce their results. The detailed procedure for releasing the code is to be determined.
- Q: Do participants need to adhere to TASK-CV abstract submission deadlines to participate in the challenge?
  A: Submission of a TASK-CV workshop abstract is not mandatory for participation in the challenge; however, any team that wishes to be considered for prizes or to be invited to speak at the workshop must submit a six-page abstract (ECCV paper format) directly via email to visda2020-organizers@googlegroups.com within one week of the challenge end. The top-performing teams that submit abstracts will be invited to present their approaches at the workshop.
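To illustrate the kind of pseudo-labeling the rules permit (no human annotation, target training split only), here is a minimal sketch of a common baseline: cluster target-domain features produced by the source-trained model and use the cluster indices as pseudo identities for fine-tuning. This is not part of the official toolkit; the clustering parameters and the helper names `extract_features` and `finetune` are hypothetical placeholders.

```python
# Hypothetical sketch of clustering-based pseudo-labeling; apply it ONLY to the
# target training split (never to target_validation or target_test).
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def pseudo_label(features, eps=0.6, min_samples=4):
    """features: (N, D) embeddings of target training images.
    Returns one pseudo ID per image; -1 marks outliers to be discarded."""
    feats = normalize(features)  # L2-normalize so cosine distance is meaningful
    return DBSCAN(eps=eps, min_samples=min_samples, metric="cosine").fit_predict(feats)

# Usage sketch (extract_features / finetune are placeholders for your own code):
# feats = extract_features(source_model, target_train_images)
# pids = pseudo_label(feats)
# keep = pids != -1                      # drop clustering outliers
# finetune(source_model, target_train_images[keep], pids[keep])
```

In practice this loop is often iterated: fine-tune on the pseudo labels, re-extract target features with the updated model, and re-cluster.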