TY - GEN
T1 - Impact of Transfer Learning on Transformers Networks for Prostate Image Segmentation
AU - Casanova, Xavier
AU - Baldeon-Calisto, Maria
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The segmentation of the prostate in magnetic resonance images (MRI) plays a crucial role in the detection and treatment planning of prostate cancer and other diseases. Recently, vision transformers (ViTs) have achieved great success in automating the segmentation of the different zones of the prostate. In particular, ViTs excel at capturing long-range dependencies within an image through their self-attention mechanisms. However, ViTs demand large training datasets for effective performance, posing a challenge in medical applications where acquiring such datasets is costly and time-consuming. Transfer learning offers a solution by pre-training a ViT on a natural image dataset and fine-tuning it for the specific segmentation task at hand. In this work, we statistically analyze how transfer learning from natural image datasets impacts the performance of ViTs in prostate MRI segmentation. We evaluate three ViT architectures, both with and without transfer learning, using the prostate dataset from the Medical Segmentation Decathlon database, with the Dice coefficient as the evaluation metric. Through a paired t-test, our analysis reveals that applying transfer learning from natural images does not improve segmentation performance for either the peripheral zone or the transition zone of the prostate. This suggests that features learned from natural images are not readily transferable to medical imaging tasks. Moreover, our experiments also indicate that pre-training does not speed up optimization convergence during training.
AB - The segmentation of the prostate in magnetic resonance images (MRI) plays a crucial role in the detection and treatment planning of prostate cancer and other diseases. Recently, vision transformers (ViTs) have achieved great success in automating the segmentation of the different zones of the prostate. In particular, ViTs excel at capturing long-range dependencies within an image through their self-attention mechanisms. However, ViTs demand large training datasets for effective performance, posing a challenge in medical applications where acquiring such datasets is costly and time-consuming. Transfer learning offers a solution by pre-training a ViT on a natural image dataset and fine-tuning it for the specific segmentation task at hand. In this work, we statistically analyze how transfer learning from natural image datasets impacts the performance of ViTs in prostate MRI segmentation. We evaluate three ViT architectures, both with and without transfer learning, using the prostate dataset from the Medical Segmentation Decathlon database, with the Dice coefficient as the evaluation metric. Through a paired t-test, our analysis reveals that applying transfer learning from natural images does not improve segmentation performance for either the peripheral zone or the transition zone of the prostate. This suggests that features learned from natural images are not readily transferable to medical imaging tasks. Moreover, our experiments also indicate that pre-training does not speed up optimization convergence during training.
KW - Medical Image Segmentation
KW - Natural Image Segmentation
KW - Prostate MRI Segmentation
KW - Transfer Learning
KW - Vision Transformers
KW - ViTs
UR - http://www.scopus.com/inward/record.url?scp=105000928906&partnerID=8YFLogxK
U2 - 10.1109/ICMLA61862.2024.00121
DO - 10.1109/ICMLA61862.2024.00121
M3 - Conference contribution
AN - SCOPUS:105000928906
T3 - Proceedings - 2024 International Conference on Machine Learning and Applications, ICMLA 2024
SP - 841
EP - 844
BT - Proceedings - 2024 International Conference on Machine Learning and Applications, ICMLA 2024
A2 - Wani, M. Arif
A2 - Angelov, Plamen
A2 - Luo, Feng
A2 - Ogihara, Mitsunori
A2 - Wu, Xintao
A2 - Precup, Radu-Emil
A2 - Ramezani, Ramin
A2 - Gu, Xiaowei
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 23rd IEEE International Conference on Machine Learning and Applications, ICMLA 2024
Y2 - 18 December 2024 through 20 December 2024
ER -