Abstract

In an aging society, automating the time-consuming, repetitive tasks performed by medical doctors is becoming imperative if their capacity to treat patients is to be maximized. Automatic biomedical image segmentation algorithms are therefore set to play a key role in the healthcare of the future. Currently performed by radiologists, this time-consuming procedure consists of assigning areas of an image to the corresponding anatomical structures. Automatic segmentation algorithms proposed in the literature can be divided into atlas-based methods, methods using statistical shape knowledge, and deep learning algorithms. Deep learning algorithms require neither complex preparation of an atlas nor a priori knowledge about the segmented shape; however, their performance depends on the size and quality of the training dataset. Employing the U-Net convolutional neural network architecture, the authors aim to overcome the bottleneck of a small dataset through artificial data augmentation, creating new training samples with flipping and elastic deformation procedures. A further increase in accuracy was obtained by combining binary segmentation models, each trained to segment a single anatomical structure in the image. As most work in the field focuses on introducing novel neural network architectures, the thorough description of the impact of these refinement steps sets this paper apart from other publications. The method was evaluated quantitatively using the Dice coefficient. The presented results show how the coefficient values differ across the magnetic resonance sequences used in training. Furthermore, the impact of data augmentation on segmentation accuracy is showcased, along with segmentation examples for visual inspection. The authors also discuss the practical usefulness of the algorithm, its limitations, and future development plans.
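The flipping and elastic deformation augmentation mentioned above can be sketched roughly as follows. This is a minimal NumPy/SciPy illustration, not the authors' implementation; the displacement-field parameters `alpha` and `sigma` are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def augment(image, mask, alpha=34.0, sigma=4.0, seed=None):
    """Create a new training pair: horizontal flip followed by a random
    elastic deformation applied identically to image and binary mask.
    alpha/sigma are hypothetical values, not the paper's settings."""
    rng = np.random.default_rng(seed)
    image, mask = np.fliplr(image), np.fliplr(mask)
    # Smooth random displacement fields, scaled by alpha
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(image.shape[0]),
                       np.arange(image.shape[1]), indexing="ij")
    coords = np.array([y + dy, x + dx])
    # Bilinear interpolation for the image, nearest-neighbour for the mask
    warped_img = map_coordinates(image, coords, order=1, mode="reflect")
    warped_mask = map_coordinates(mask, coords, order=0, mode="reflect")
    return warped_img, warped_mask
```

Warping image and mask with the same displacement field keeps the ground-truth labels aligned with the deformed anatomy, which is why elastic deformation is a popular augmentation for segmentation tasks.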
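The Dice coefficient used for evaluation measures the overlap between a predicted and a ground-truth mask; a minimal sketch of the standard formula (the small `eps` term, an assumption here, guards against division by zero for empty masks):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Example: masks share 2 of their 3 foreground pixels each
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_coefficient(a, b)  # ≈ 2*2 / (3+3) ≈ 0.667
```

A value of 1 indicates perfect overlap and 0 indicates no overlap, which makes the metric well suited for comparing per-structure binary segmentation models.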

Keywords

U-Net, Image Segmentation, Magnetic Resonance Imaging (MRI), Data Augmentation