The MSD challenge tests the generalisability of machine learning algorithms when applied to 10 different semantic segmentation tasks. The aim is to develop an algorithm or learning system that can solve each task, separately, without human interaction. This can be achieved through the use of a single learner, an ensemble of multiple learners, architecture search, curriculum learning, or any other technique, as long as task-specific model parameters are not human-defined.
Data for the 10 tasks can be downloaded here. Participants are expected to submit their segmentation results through this website. Rankings will be recalculated monthly and the results updated accordingly.
- Each team is allowed only 1 submission per day. Teams may not register multiple times to circumvent this limit. If you have created multiple teams, please notify us by emailing firstname.lastname@example.org so that we can delete your previous submissions. Failure to disclose multiple registrations will result in your participation in the competition being terminated.
- Submissions take almost 1 hour to run through the validation script, and only one validation machine is available. Submissions are processed on a first-in, first-out basis.
- We would like to note that, as described in the challenge proposition, teams cannot manually tweak the parameters of their algorithms/models on a task-specific basis. Any parameter tuning has to happen automatically and algorithmically. For example, the learning rate or the depth of a network cannot be changed manually between tasks, but they can be found automatically through cross-validation. Any team found to use human-defined, task-specific parameters will have its participation terminated. If you are in doubt whether your algorithm qualifies as "algorithmically optimised", please email us at email@example.com to confirm.
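To illustrate what "algorithmically optimised" could look like in practice, the sketch below selects a hyperparameter (e.g. a learning rate) by k-fold cross-validation with no human in the loop. This is not an official reference implementation; the function names and the candidate grid are hypothetical, and `score_fn` is a stand-in for training a segmentation model on a task's data and returning a validation score.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle range(n) deterministically and split it into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    fold_size, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = fold_size + (1 if i < rem else 0)
        folds.append(idx[start:start + size])
        start += size
    return folds

def select_hyperparameter(candidates, score_fn, n_samples, k=5):
    """Return the candidate value with the best mean cross-validation score.

    score_fn(value, train_idx, val_idx) is a placeholder: in a real entry it
    would train the model with `value` on train_idx and return a validation
    score (higher is better) on val_idx. The same code runs unchanged on
    every task, so no parameter is human-defined per task.
    """
    folds = k_fold_indices(n_samples, k)
    best_value, best_score = None, float("-inf")
    for value in candidates:
        scores = []
        for i, val_idx in enumerate(folds):
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(score_fn(value, train_idx, val_idx))
        mean_score = sum(scores) / len(scores)
        if mean_score > best_score:
            best_value, best_score = value, mean_score
    return best_value
```

For instance, calling `select_hyperparameter([0.1, 0.01, 0.001], score_fn, n_samples=len(task_data))` once per task would pick each task's learning rate automatically, which is the kind of tuning the rule permits.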