Anatomy3 continuous evaluation

***NOTICE: VISCERAL cloud services had to be turned down due to a lack of funding.***

 

Evaluation results on the test data can be obtained and published at any time on the participant Leaderboard. The latest results from the Anatomy benchmarks are published in the VISCERAL Anatomy IEEE TMI paper.

Online result updates are on the Leaderboard

Leaderboard ISBI2015 snapshot


Four imaging modalities (CT and MR, each with and without contrast enhancement), with up to 20 manually annotated anatomical structures per volume in the Anatomy training data set:

  • Left kidney
  • Right kidney
  • Spleen
  • Liver
  • Pancreas
  • Left lung
  • Right lung
  • Trachea
  • Aorta
  • Sternum
  • Urinary bladder
  • Gallbladder
  • Thyroid
  • Left adrenal gland
  • Right adrenal gland
  • 1st lumbar vertebra
  • Left rectus abdominis muscle
  • Right rectus abdominis muscle
  • Left psoas major muscle
  • Right psoas major muscle

Task Description

A set of annotated medical imaging data is provided to the participants, along with a complimentary cloud-computing instance (8-core CPU with 16 GB RAM) on which participant algorithms can be developed and evaluated. The available data contains manual annotations of several anatomical structures in different imaging modalities, e.g. CT and MR. Participants can select one or more of the segmentation tasks covered by the VISCERAL data set. For instance, an algorithm that can segment only some organs in a given modality will be evaluated in exactly the categories for which it outputs results. In other words, we present a per-anatomy, per-modality evaluation depending on the nature of the participating algorithm and the attempted image analysis tasks. Indeed, the vision of VISCERAL is to create a single, large, multi-purpose medical image dataset on which different research groups can test their specific applications and solutions. An overview of the Anatomy1, Anatomy2 and Anatomy3 continuous datasets, infrastructure and benchmark results can be found in the VISCERAL Anatomy IEEE TMI paper.
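Per-anatomy, per-modality results of this kind are typically reported with an overlap metric such as the Dice coefficient. The sketch below (plain NumPy, not the official VISCERAL evaluation code) shows how such a per-structure score could be computed from a binary segmentation and its reference annotation:

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice overlap between a binary segmentation and a reference mask."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Toy 2x3 masks standing in for one structure in one modality.
seg = np.array([[1, 1, 0],
                [0, 0, 0]])
ref = np.array([[1, 0, 0],
                [0, 0, 0]])
print(dice_coefficient(seg, ref))  # 2 * 1 / (2 + 1) = 0.666...
```

In a per-anatomy, per-modality evaluation, a score like this would be computed separately for each (structure, modality) pair the algorithm attempted.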

Training Data set

[Figure: breakdown of training-set annotations per structure]

Images: 20 volumes for each of 4 combinations of imaging modality and field-of-view, with and without contrast enhancement, adding up to 80 volumes in total:

  • Unenhanced whole-body CT
  • Contrast-enhanced abdomen and thorax CT
  • Unenhanced whole-body MR T1
  • Contrast-enhanced abdomen MR T1

Annotations: In each volume, up to 20 structures are segmented. The missing annotations are due to poor visibility of the structures in certain imaging modalities or due to such structures being outside the field-of-view. Accordingly, in all 80 volumes, a total of 1295 structures are segmented. A breakdown of annotations per anatomy can be seen in figures linked on the left.

Test Data set and Evaluation

Note that the test data will NOT be accessible directly by the participants. Instead, participants provide us with a compiled annotation executable that can be called in a pre-defined manner to produce results for any input image. The organizers run these executables in the VM environment to annotate and evaluate a large set of medical test images.
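As an illustration of such a pre-defined calling convention, the sketch below assembles a command line for one input volume. The executable name, flag names and structure identifier here are all hypothetical; the actual input/output specification is the one announced in the Guidelines for Participation:

```python
import subprocess  # used in the commented invocation below

def build_command(executable, input_image, output_mask, structure_id):
    """Assemble a hypothetical command line for calling a participant
    executable on one input volume (flag names are made up for illustration)."""
    return [executable,
            "--input", input_image,
            "--output", output_mask,
            "--structure", str(structure_id)]

# 7 is a placeholder structure identifier, not a real VISCERAL code.
cmd = build_command("./segment_organs", "patient01_ct.nii.gz",
                    "patient01_liver.nii.gz", 7)
print(" ".join(cmd))
# The organizers' harness would then run something like:
# subprocess.run(cmd, check=True, timeout=3600)
```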

 

How to participate

1.  Register for a benchmark account at the VISCERAL registration website. Choose "Anatomy 3 Benchmark" and your choice of operating system (Linux, Windows, etc.) for your virtual machine (VM).

2.  Download the data usage agreement, get it signed, and upload it to the participant dashboard.

3.  After signing the contract you can access the training dataset via FTP and download it for offline training.

4.  Install your algorithms in the virtual machine, while adapting and testing them on the training data. Take a look at the Anatomy3 Guidelines for Participation (currently v2.0 of 20151906) for information on doing this.

5.  Prepare your executable on your VM according to the announced input/output specifications.

6.  Submit your VM (through "Submit VM" button in the dashboard) for the evaluation on the test data (at most once every seven days). 

7.  The participants receive feedback from their evaluations and can choose to make their results publicly available.

8.  This evaluation process can be performed iteratively. 
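To make step 5 above concrete, a participant executable could be structured like the Python stub below. The command-line flags are hypothetical placeholders for the announced input/output specification, and the segmentation itself is left as an all-background dummy:

```python
import argparse
import numpy as np

def segment(volume):
    """Dummy segmentation: returns an all-background mask of the same shape.
    A real submission replaces this with the actual algorithm."""
    return np.zeros(volume.shape, dtype=np.uint8)

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Anatomy3 segmentation stub")
    # Hypothetical flags; the real interface is defined by the benchmark spec.
    parser.add_argument("--input", required=True, help="path to the input volume")
    parser.add_argument("--output", required=True, help="path for the output mask")
    return parser.parse_args(argv)

# Demo invocation with explicit argv, as the evaluation harness might supply.
# A real executable would load args.input (e.g. with SimpleITK), call
# segment(), and write the mask to args.output.
args = parse_args(["--input", "patient01_ct.nii.gz", "--output", "mask.nii.gz"])
mask = segment(np.zeros((4, 4, 4)))
print(args.input, "->", args.output, "mask voxels:", int(mask.sum()))
```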

Note that the Anatomy3 round of the VISCERAL Anatomy series of benchmarks will not offer a landmark detection task. We hope to include this task again in future rounds.

Virtual Machines in the Cloud

The vision of the VISCERAL project is the automatic annotation and evaluation of very large datasets in the cloud. Accordingly, it requires that participants provide us with compiled annotation executables, which we then run on the test data in the provided VM during the evaluation phase. Participants are strongly encouraged to follow the general file-access and VM-evaluation guidelines, so that they can easily take part in our continuing series of benchmarks as the amount of available data grows.

Once you have completed registration and submitted the signed data usage agreement, the training images and annotations are available via FTP for offline training.

Organizers

The Anatomy3 challenge is organized by the VISCERAL Consortium.

To keep up to date with the latest news on VISCERAL benchmarks, subscribe to the VISCERAL mailing list.

For specific questions regarding this benchmark, you may contact Antonio Foncubierta.