See Anatomy3 Continuous for the currently running Anatomy Segmentation Benchmark.
This page describes the Anatomy3 Benchmark as it was run in 2014-2015.
Real-time result updates on the Leaderboard are now online!
The results from the Anatomy benchmarks have been published in the VISCERAL Anatomy IEEE TMI paper.
More details on the segmentation challenge at ISBI are available.
In this challenge, a set of annotated medical imaging data is provided to the participants, along with a powerful complimentary cloud-computing instance (8-core CPU with 16 GB RAM) on which participant algorithms can be developed and evaluated. The available data contains segmentations of several different anatomical structures in different image modalities, e.g. CT and MRI. Participants do NOT have to address all the tasks this data supports; they may attempt any sub-problem of it. For instance, an algorithm that segments only a subset of the organs or modalities will be evaluated in exactly those categories for which it outputs results. In other words, we will present per-anatomy, per-modality evaluation results depending on the nature of each participating algorithm and the image analysis tasks it attempts. Indeed, the vision of VISCERAL is to create a single, large, multi-purpose medical image dataset on which different research groups can test their specific applications and solutions.
Annotated structures found in the training data corpus:
- Segmentations: left/right kidney, spleen, liver, left/right lung, urinary bladder, rectus abdominis muscle, 1st lumbar vertebra, pancreas, left/right psoas major muscle, gallbladder, sternum, aorta, trachea, left/right adrenal gland.
How to participate
- Register for a benchmark account at the VISCERAL registration website. Choose "Anatomy 3 Benchmark" and your choice of operating system (Linux, Windows, etc) for your virtual machine (VM).
- Download the data usage agreement, get it signed, and upload it to the participant dashboard.
- After signing the contract you can access the training dataset via FTP and download it for offline training.
- Install your algorithms in the virtual machine, while adapting and testing them on the training data. Take a look at the Anatomy3 Guidelines for Participation (currently v1.0 of 20141118) for information on doing this.
- Prepare your executable on your VM according to the announced input/output specifications.
- Submit your VM (through "Submit VM" button in the dashboard) for the evaluation on the test data.
- This evaluation process can be performed iteratively during the training phase.
- The participants receive feedback from their evaluations and have the option to make results publicly available.
- Results published in the Leaderboard before the challenge deadline will be used for the final evaluation.
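The announced input/output specification is given in the Anatomy3 Guidelines for Participation; as a rough illustration only, an executable conforming to such a specification is typically a command-line program that takes an input volume, a target structure, and an output path. The argument names and file formats below are assumptions for the sketch, not the official contract:

```python
# Hypothetical sketch of a command-line contract for a segmentation
# executable. The actual VISCERAL Anatomy3 I/O specification is defined
# in the Guidelines for Participation; argument names here are assumed.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(
        description="Segment one anatomical structure in one image volume.")
    parser.add_argument("--input", required=True,
                        help="path to the input image volume (e.g. NIfTI)")
    parser.add_argument("--structure", required=True,
                        help="identifier of the anatomical structure to segment")
    parser.add_argument("--output", required=True,
                        help="path where the binary segmentation mask is written")
    return parser


def main(argv=None):
    args = build_parser().parse_args(argv)
    # ... load args.input, run the segmentation algorithm for
    # args.structure, write the resulting mask to args.output ...
    return args

# Example invocation on the VM (hypothetical):
#   python segment.py --input ct_volume.nii --structure liver --output liver_mask.nii
```

Packaging the algorithm behind a fixed command-line interface like this is what allows the organizers to call it automatically on unseen test volumes inside the VM.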
In addition, participants will explain the methodology they used for the challenge in a challenge presentation and in a short paper, which will be compiled and published online as challenge proceedings.
Note that the Anatomy3 round of the VISCERAL Anatomy series of benchmarks will not offer a landmark detection task. We hope to include this task again in future rounds.
- Mid November: Registration opens.
- Mid November: Training dataset is available.
- 19th December: First leaderboard publication.
- 30th March 2015: Submission deadline for results that will be presented at the ISBI event.
If you wish to present your results at the challenge:
- A short abstract/PDF will be required to contribute to the challenge proceedings. Details on the submission method are given on the Workshop page.
Virtual Machines in the Cloud
The vision of the VISCERAL project is the automatic annotation and evaluation of very large datasets in the cloud. Accordingly, it requires that participants provide us with compiled annotation executables, which we then run on the test data in a VM during the evaluation phase. This is NOT a requirement for this ISBI challenge, which will only take annotation volumes from the participants (not algorithm executables). Nonetheless, participants are strongly encouraged to follow the general file access and VM evaluation guidelines, so that they can easily participate in our continued series of benchmarks with a growing amount of available data.
Once you have completed the registration and submitted the signed data agreement form, the training images and annotations are available for access via FTP for offline training.
Images: 20 volumes each for 4 different combinations of image modality and field of view, with and without contrast enhancement, which adds up to 80 volumes in total.
Annotations: In each volume, up to 20 structures are segmented. The missing annotations are due to poor visibility of the structures in certain image modalities or due to such structures being outside the field-of-view. Accordingly, in all 80 volumes, a total of 1295 structures are segmented. A breakdown of annotations per anatomy can be seen in figures linked on the left.
Test Dataset and Evaluation
Note that the test data will NOT be accessible directly by the participants. Instead, the participants provide us with their compiled annotation executable that can be called in a pre-defined manner to produce results for any input image. The organizers will use those in the VM environment to annotate/evaluate a large set of medical testing images.
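Segmentation results produced in this way are typically scored by overlap between the algorithm's mask and the expert annotation. As a minimal sketch, assuming binary masks and the widely used Dice coefficient (one standard overlap measure for segmentation; the official evaluation may compute additional metrics):

```python
# Minimal sketch of a segmentation overlap score, assuming binary masks.
# The Dice coefficient is a standard measure; the official VISCERAL
# evaluation pipeline may report this and further metrics.
import numpy as np


def dice(pred, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom


# Example: one overlapping voxel out of pred=2, truth=1 labeled voxels
# gives 2*1 / (2+1) = 0.666...
score = dice([1, 1, 0, 0], [1, 0, 0, 0])
```

A score of 1.0 means perfect agreement with the annotation; 0.0 means no overlap at all.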
To keep up to date with the latest news on VISCERAL benchmarks, subscribe to the VISCERAL mailing list.
For specific questions regarding this ISBI challenge, you may contact Antonio Foncubierta.