VISCERAL Anatomy2

Anatomy2 Results

The results are now available.

VISCERAL Organ Segmentation and Landmark Detection Benchmark

While a growing number of benchmark studies compare the performance of algorithms for automated organ segmentation or lesion detection in images with restricted fields of view, no efforts have so far been made towards benchmarking these and related routines for the automated identification and segmentation of bones, inner organs and relevant substructures visible in images with a wide field of view, showing all of the abdomen, the trunk, or even the whole body. In this benchmark we will evaluate related segmentation and detection algorithms on a large dataset of clinical wide-field-of-view MRI and CT scans - from abdomen, trunk and whole body - in which major organs and their substructures have been manually delineated, along with specific anatomical interest points, or landmarks.

Our data set comprises 3D images in four modalities: whole-body CT and MRI T1, and abdomen/thorax contrast-enhanced CT and contrast-enhanced MRI T1. Ground-truth annotations include up to 20 anatomical structures segmented in each volume, such as kidneys, lungs, liver, spleen, urinary bladder, pancreas, adrenal glands, thyroid, aorta, L1 vertebra, sternum, and some muscles. Annotations also include up to 53 anatomical landmarks located in each volume, such as the lateral end of the clavicula, crista iliaca, symphysis, trochanter major/minor, tip of the aortic arch, trachea/aortic bifurcations, and the vertebrae.

The benchmark will take place in a cloud environment provided and funded by the organizers, so as to satisfy the requirement of the ethics committee that the medical data not be distributed but stored in a single place. It succeeds an earlier benchmark (Anatomy1) on the same task, but has more annotated data available and more computationally powerful virtual machines provided to the participants. The participants submit their virtual machines with their installed programs to the organizers (proprietary source code can be removed). The organizers then run the submitted programs on unseen test data and provide the results to the participants. Results will be presented and discussed at a final benchmark workshop (location to be decided).

The final submissions will also be used to create a "silver corpus" of annotated data based on agreement between the submitted automated segmentations. This will result in a large collection of "reasonably annotated" data. The volumes, along with the small manually annotated "gold corpus" and the large "silver corpus", will continue to be available as a resource to the community.
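One simple way to derive such consensus labels - a minimal sketch, assuming the submitted results are spatially aligned binary masks of equal shape; the actual silver-corpus procedure is defined by the organizers - is per-voxel majority voting:

    import numpy as np

    def fuse_majority(masks):
        """Per-voxel majority vote over a list of aligned binary masks."""
        stack = np.stack([m.astype(np.uint8) for m in masks])
        votes = stack.sum(axis=0)
        # A voxel is kept if more than half of the submissions agree on it.
        return (2 * votes > len(masks)).astype(np.uint8)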

VISCERAL Anatomy2 (Benchmark 1b) factsheet

Important dates

Jan 14, Tue ✓  Benchmark opens: Participants can register and submit the signed dataset agreement form.
Feb 3, Mon  Participants receive access to their cloud computing virtual machine and begin development.
Feb 12, Wed  Training dataset of 60 volumes is available.
May 6, Fri  Further 20 training volumes (from the ISBI challenge) are released. See the challenge site.
Jun 16, Mon (midnight on Baker Island, UTC-12)  Submission of virtual machines. All necessary executables must be in the cloud virtual machines. Press the "Submit VM" button on the participant dashboard once the VM is finalised.
Jun 30, Mon (midnight on Baker Island, UTC-12)  Deadline for submission of papers to the VISCERAL Track of the MICCAI Medical Computer Vision Workshop (see submission instructions).
Jul 24, Thu  Notification of acceptance for the VISCERAL Track.
Jul 30, Wed  Organizers finish evaluation and report results to the participants.
TBA  Preparation starts for a workshop and joint publication based on the benchmark.

More information on our ISBI 2014 challenge on May 1st is available on the challenge website.

Participating in the Anatomy2 benchmark

  • Register for a benchmark account at the VISCERAL registration website. Choose "Benchmark 1b" and your preferred operating system (Linux, Windows, etc.) for your virtual machine (VM).
  • Download the data usage agreement, get it signed, and upload it to the participant dashboard in order to receive access to your VM and to the training data.
  • Read and follow the Anatomy2 Guidelines for Participation.
  • Install your algorithms in the virtual machine, adapting and testing them on the training data.
  • Optionally, you can participate in our challenge at the IEEE ISBI, Apr 28-May 2 in Beijing, China. Follow the instructions and deadlines here.
  • Prepare your executable on your VM according to the announced input/output specifications (a hypothetical sketch follows this list).
  • By the submission deadline, submit your VM (through the "Submit VM" button in the dashboard) for evaluation on the test data.
  • Submit a paper to a benchmark workshop (VISCERAL Track of the MICCAI Medical Computer Vision Workshop).
  • Contribute to a joint journal paper summarizing results.
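As an illustration of what such an executable might look like - a minimal sketch only; the argument names and calling convention below are placeholders, and the organizers' input/output specification document is the binding reference - a command-line wrapper could be structured as follows:

    #!/usr/bin/env python
    # Hypothetical wrapper around a participant's segmentation pipeline.
    # The flags --input, --structure and --output are placeholders, not the
    # official VISCERAL calling convention.
    import argparse

    def run_segmentation(input_path, structure_id, output_path):
        """Placeholder for the participant's own segmentation code."""
        raise NotImplementedError

    def main():
        parser = argparse.ArgumentParser(
            description="Segment one structure in one volume.")
        parser.add_argument("--input", required=True,
                            help="path to the input volume")
        parser.add_argument("--structure", required=True,
                            help="identifier of the target structure")
        parser.add_argument("--output", required=True,
                            help="path for the output label volume")
        args = parser.parse_args()
        run_segmentation(args.input, args.structure, args.output)

    if __name__ == "__main__":
        main()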

Data sets and annotation tasks

The data sets used for the benchmark were acquired during daily clinical routine work. Whole-body MRI and CT scans or examinations of the whole trunk are used. Furthermore, imaging of the abdomen in MRI and contrast-enhanced CT for oncological staging purposes is included, since it offers higher resolution for segmentation, especially of smaller inner organs such as the adrenal glands. The organizers will make available manually annotated data created by radiologists.

Annotated structures found in the training data corpus: 

  • Segmentations: left/right kidney, spleen, liver, left/right lung, urinary bladder, rectus abdominis muscle, 1st lumbar vertebra, pancreas, left/right psoas major muscle, gallbladder, sternum, aorta, trachea, left/right adrenal gland.
  • Landmarks: lateral end of the clavicula, crista iliaca, symphysis below, trochanter major, trochanter minor, tip of aortic arch, trachea bifurcation, aortic bifurcation

The numbers of such annotations available in the current training set (for the upcoming ISBI challenge) can be found on our challenge website.

There are two tasks in which it is possible to participate: (1) segmentation of anatomical structures (lung, liver, kidney, ...) in non-annotated whole-body MR and CT volumes (participants can choose which of the organs to segment), and (2) identification of anatomical landmarks in the same data. So that algorithms that can, for instance, segment organs but not localize them in a large volume can still take part, the organizers will provide additional initialization information if participants desire.
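As an example of how such initialization information might be used - a minimal sketch, assuming the initialization takes the form of a seed voxel inside the target organ, which is our assumption rather than the announced specification - a simple intensity-based region growing can bootstrap a segmentation:

    import SimpleITK as sitk

    # Hypothetical file name and seed coordinates; the real initialization
    # format is defined by the organizers.
    volume = sitk.ReadImage("ct_volume.nii.gz")
    seed = (256, 190, 110)  # (x, y, z) voxel index inside the target organ

    # Grow a region around the seed within an intensity window (the values,
    # in Hounsfield units here, are illustrative and not tuned).
    mask = sitk.ConnectedThreshold(volume, seedList=[seed],
                                   lower=-100, upper=200, replaceValue=1)
    sitk.WriteImage(mask, "organ_mask.nii.gz")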

We also encourage groups developing algorithms for subsets of the organs or landmarks to participate and submit results for individual, self-defined subtasks, e.g. just “lung and kidney” segmentation or just “landmarks in CT”.

A detailed description of the data and of the annotated organs and landmarks is available in the document: Data set for first competition. More information on the tasks and their evaluation is in the document: Definition of the evaluation protocol and goals.

Benchmark organisation in the cloud

The data will be stored on the Microsoft Azure Cloud, and when participants register, they will receive a computing instance in the Microsoft Azure cloud (Windows or Linux), provided and paid for by VISCERAL, with the support of Microsoft Research. The benchmark runs in two phases (compare with "Important dates", above):

  • Training Phase. The participants each have their own virtual machine (VM) in the cloud. They are provided with links to the annotated training dataset, accessible through the VMs. Participants should develop their software (executables) for carrying out the benchmark tasks by closely following the specifications made available by the organizers. For the submission, participants should place their executables in their VM. Any proprietary source code can be removed by the participant before submission. The training data will have the same structure as the test data, which will not be accessible to the participants directly, but will only be exposed to their executables once the virtual machines are taken over by the organizers. On top of the 60 training image volumes, an additional 20 volumes will be made available after the ISBI challenge. For details, please see our challenge website.
  • Evaluation Phase. By the benchmark submission deadline, the participants should hand over the VMs to the organizers by pressing the "Submit VM" button in their participant dashboard. The organizers will then run the participant executables in the submitted VMs against the test data and evaluate the generated results. The evaluation results will then be communicated to the participants, and the submitted programs will also be used to construct the "silver corpus" described above.

Mailing List

To keep up to date with the latest news on VISCERAL, subscribe to the VISCERAL Mailing List.

Ask questions and make comments on the LinkedIn Group.

Documents and Resources

The latest version of the EvaluateSegmentation tool can always be downloaded from: https://github.com/codalab/EvaluateSegmentation
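For orientation, the central overlap measure that such evaluation tools report is the Dice coefficient. A minimal sketch of computing it directly from two binary masks follows; the file names are placeholders, and reading NIfTI volumes via SimpleITK is our assumption, not a statement about the tool's internals:

    import SimpleITK as sitk
    import numpy as np

    # Placeholder file names; any pair of aligned binary label volumes works.
    truth = sitk.GetArrayFromImage(sitk.ReadImage("truth_mask.nii.gz")) > 0
    pred = sitk.GetArrayFromImage(sitk.ReadImage("predicted_mask.nii.gz")) > 0

    # DICE = 2 |A ∩ B| / (|A| + |B|)
    dice = 2.0 * np.logical_and(truth, pred).sum() / (truth.sum() + pred.sum())
    print("DICE = %.4f" % dice)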

The data format for this benchmark, VISCERAL Anatomy2, is similar to the data format definition for our earlier benchmark, VISCERAL Anatomy1.
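Assuming the volumes are distributed as compressed NIfTI files, as in Anatomy1 (an assumption here; the format definition document is the binding reference), a volume and its metadata can be inspected as follows; the file name is a placeholder:

    import SimpleITK as sitk

    # Placeholder file name, not an official naming convention.
    volume = sitk.ReadImage("patient01_CT_wb.nii.gz")
    print("size:   ", volume.GetSize())     # voxels per axis (x, y, z)
    print("spacing:", volume.GetSpacing())  # voxel size in mm
    print("origin: ", volume.GetOrigin())   # physical position of voxel (0, 0, 0)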

The annotation software used to create the ground truth is described in this document.

To ensure effective use of the manual annotation effort, an active annotation framework is used. This is described in this document: Prototype of gold corpus active annotation framework.

The webpage of the earlier benchmark (Anatomy1, 2013) on the same data set contains some related information.

Organizers

Allan Hanbury, Vienna University of Technology, Austria  (VISCERAL Coordinator)

Henning Müller, University of Applied Sciences Western Switzerland (HES-SO), Switzerland

Georg Langs, Medical University of Vienna, Austria

Orçun Göksel, ETH Zürich, Switzerland

Marc-André Weber, University of Heidelberg, Germany

Tomàs Salas Fernandez, Catalan Agency for Health Information, Assessment and Quality, Spain