Soft attention-based U-NET for automatic segmentation of OCT kidney images


Journal article


Mousa Moradi, Xian Du, Yu Chen
BiOS, 2022

Cite

APA
Moradi, M., Du, X., & Chen, Y. (2022). Soft attention-based U-NET for automatic segmentation of OCT kidney images. BiOS.


Chicago/Turabian
Moradi, Mousa, Xian Du, and Yu Chen. “Soft Attention-Based U-NET for Automatic Segmentation of OCT Kidney Images.” BiOS (2022).


MLA
Moradi, Mousa, et al. “Soft Attention-Based U-NET for Automatic Segmentation of OCT Kidney Images.” BiOS, 2022.


BibTeX

@article{mousa2022a,
  title = {Soft attention-based U-NET for automatic segmentation of OCT kidney images},
  year = {2022},
  journal = {BiOS},
  author = {Moradi, Mousa and Du, Xian and Chen, Yu}
}

Abstract

Deep learning-based models have been used extensively in computer vision and image analysis to automatically segment the region of interest (ROI) in an image. Optical coherence tomography (OCT) is used to image the kidney's proximal convoluted tubules (PCTs), from which morphometric parameters such as tubular density and diameter can be quantified. However, the large image dataset and patient movement during scanning make the pattern recognition and deep learning task difficult. Another challenge is the large number of non-ROI pixels relative to ROI pixels, which causes data imbalance and low network performance. This paper aims to develop a soft Attention-based UNET model for automatic segmentation of tubule lumens in OCT kidney images. Attention-UNET extracts features guided by the ground-truth structure, so irrelevant feature maps do not contribute during training. The performance of the soft Attention-UNET is compared with the standard UNET, residual UNET (Res-UNET), and a fully convolutional neural network (FCN). The original dataset contains 14,403 OCT images from 169 transplant kidneys for training and testing. The results show that the soft Attention-UNET achieves a Dice score of 0.78±0.08 and an intersection over union (IoU) of 0.83, which is comparable to manual segmentation (Dice score = 0.835±0.05) and is the best segmentation score among the Res-UNET, standard UNET, and FCN networks. The results also show that CLAHE contrast enhancement significantly improves the segmentation metrics of all models (p<0.05). These experimental results demonstrate that the soft Attention-based UNET is highly effective for tubule lumen identification and localization and can support fast, accurate clinical decision-making on a new transplant kidney.
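
For readers unfamiliar with the soft attention mechanism mentioned in the abstract, the sketch below illustrates the kind of additive attention gate used in Attention U-Net (Oktay et al., 2018), which weights encoder skip-connection features so that irrelevant regions contribute less to the decoder. This is a minimal PyTorch sketch under assumed settings, not the authors' exact implementation: the channel sizes are illustrative, and the gating signal and skip features are assumed to share the same spatial resolution.

import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    # Soft additive attention gate in the style of Attention U-Net.
    # g: gating signal from the coarser decoder level
    # x: skip-connection feature map from the encoder
    def __init__(self, gate_channels, skip_channels, inter_channels):
        super().__init__()
        self.W_g = nn.Sequential(
            nn.Conv2d(gate_channels, inter_channels, kernel_size=1),
            nn.BatchNorm2d(inter_channels))
        self.W_x = nn.Sequential(
            nn.Conv2d(skip_channels, inter_channels, kernel_size=1),
            nn.BatchNorm2d(inter_channels))
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1),
            nn.BatchNorm2d(1),
            nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        att = self.relu(self.W_g(g) + self.W_x(x))   # combine gate and skip features
        alpha = self.psi(att)                        # per-pixel weights in [0, 1]
        return x * alpha                             # suppress irrelevant regions

# Example usage (hypothetical shapes): weight a 128-channel skip map
# with a 256-channel gating signal at the same 64x64 resolution.
gate = AttentionGate(gate_channels=256, skip_channels=128, inter_channels=64)
x = torch.randn(1, 128, 64, 64)
g = torch.randn(1, 256, 64, 64)
out = gate(g, x)   # shape: (1, 128, 64, 64)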
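
The Dice score and IoU reported above are standard overlap metrics between a predicted mask and a ground-truth mask, and CLAHE is contrast-limited adaptive histogram equalization applied as a preprocessing step. Below is a minimal sketch of both, assuming binary NumPy masks and an 8-bit grayscale OCT image; the clip limit and tile size are illustrative defaults, not the paper's settings.

import numpy as np
import cv2

def dice_score(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth)
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B|
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def clahe_enhance(image_u8, clip_limit=2.0, tile_grid_size=(8, 8)):
    # CLAHE contrast enhancement on an 8-bit grayscale image
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    return clahe.apply(image_u8)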

