TY - GEN
T1 - Self-adaptive 2D-3D ensemble of fully convolutional networks for medical image segmentation
AU - Baldeon Calisto, Maria G.
AU - Lai-Yuen, Susana K.
N1 - Publisher Copyright:
© 2020 SPIE. All rights reserved.
PY - 2020
Y1 - 2020
N2 - Segmentation is a critical step in medical image analysis. Fully Convolutional Networks (FCNs) have emerged as powerful segmentation models, achieving state-of-the-art results on various medical image datasets. Network architectures are usually designed manually for a specific segmentation task, so applying them to other medical datasets requires extensive expertise and time. Moreover, segmentation requires handling large volumetric data, which results in large and complex architectures. Recently, methods that automatically design neural networks for medical image segmentation have been presented; however, most approaches either do not fully consider volumetric information or do not optimize the size of the network. In this paper, we propose a novel self-adaptive 2D-3D ensemble of FCNs for 3D medical image segmentation that incorporates volumetric information and optimizes both the model's performance and size. The model is composed of an ensemble of a 2D FCN that extracts intra-slice and long-range information, and a 3D FCN that exploits inter-slice information. The architectures of the 2D and 3D FCNs are automatically adapted to a medical image dataset using a multiobjective evolutionary algorithm that minimizes both the expected segmentation error and the number of parameters in the network. The proposed 2D-3D FCN ensemble was tested on the task of prostate segmentation using the image dataset from the PROMISE12 Grand Challenge. The resulting network ranks among the top 10 submissions, surpassing the performance of other automatically designed architectures while having 13.3× fewer parameters.
AB - Segmentation is a critical step in medical image analysis. Fully Convolutional Networks (FCNs) have emerged as powerful segmentation models, achieving state-of-the-art results on various medical image datasets. Network architectures are usually designed manually for a specific segmentation task, so applying them to other medical datasets requires extensive expertise and time. Moreover, segmentation requires handling large volumetric data, which results in large and complex architectures. Recently, methods that automatically design neural networks for medical image segmentation have been presented; however, most approaches either do not fully consider volumetric information or do not optimize the size of the network. In this paper, we propose a novel self-adaptive 2D-3D ensemble of FCNs for 3D medical image segmentation that incorporates volumetric information and optimizes both the model's performance and size. The model is composed of an ensemble of a 2D FCN that extracts intra-slice and long-range information, and a 3D FCN that exploits inter-slice information. The architectures of the 2D and 3D FCNs are automatically adapted to a medical image dataset using a multiobjective evolutionary algorithm that minimizes both the expected segmentation error and the number of parameters in the network. The proposed 2D-3D FCN ensemble was tested on the task of prostate segmentation using the image dataset from the PROMISE12 Grand Challenge. The resulting network ranks among the top 10 submissions, surpassing the performance of other automatically designed architectures while having 13.3× fewer parameters.
KW - Deep Learning
KW - Hyperparameter Optimization
KW - Medical Image Segmentation
KW - Multiobjective Optimization
KW - Neural Architecture Search
UR - http://www.scopus.com/inward/record.url?scp=85092576839&partnerID=8YFLogxK
U2 - 10.1117/12.2543810
DO - 10.1117/12.2543810
M3 - Conference contribution
AN - SCOPUS:85092576839
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2020
A2 - Isgum, Ivana
A2 - Landman, Bennett A.
PB - SPIE
T2 - Medical Imaging 2020: Image Processing
Y2 - 17 February 2020 through 20 February 2020
ER -