Open Access System for Information Sharing


Article
Cited 23 times in Web of Science, 28 times in Scopus

High-resolution 3D abdominal segmentation with random patch network fusion SCIE SCOPUS

Title
High-resolution 3D abdominal segmentation with random patch network fusion
Authors
Tang, Yucheng; Gao, Riqiang; Lee, Ho Hin; Han, Shizhong; Chen, Yunqiang; Gao, Dashan; Nath, Vishwesh; Bermudez, Camilo; Savona, Michael R.; Abramson, Richard G.; Bao, Shunxing; Lyu, Ilwoo; Huo, Yuankai; Landman, Bennett A.
Date Issued
2021-04
Publisher
Elsevier BV
Abstract
Deep learning for three-dimensional (3D) abdominal organ segmentation on high-resolution computed tomography (CT) is a challenging topic, in part due to the limited memory provided by graphics processing units (GPU) and the large number of parameters in 3D fully convolutional networks (FCN). Two prevalent strategies, lower resolution with a wider field of view and higher resolution with a limited field of view, have been explored but presented with varying degrees of success. In this paper, we propose a novel patch-based network with random spatial initialization and statistical fusion on overlapping regions of interest (ROIs). We evaluate the proposed approach using three datasets consisting of 260 subjects with varying numbers of manual labels. Compared with the canonical "coarse-to-fine" baseline methods, the proposed method increases the performance on multi-organ segmentation from 0.799 to 0.856 in terms of mean DSC score (p-value < 0.01 with paired t-test). The effect of different numbers of patches is evaluated by increasing the depth of coverage (expected number of patches evaluated per voxel). In addition, our method outperforms other state-of-the-art methods in abdominal organ segmentation. In conclusion, the approach provides a memory-conservative framework to enable 3D segmentation on high-resolution CT. The approach is compatible with many base network structures, without substantially increasing the complexity during inference.

Graphical abstract: Given a CT scan at high resolution, a low-resolution section (left panel) is trained with multi-channel segmentation. The low-resolution part contains down-sampling and normalization in order to preserve the complete spatial information. Interpolation and random patch sampling (mid panel) are employed to collect patches. The high-dimensional probability maps are acquired (right panel) from the integration of all patches over fields of view. (c) 2020 Elsevier B.V. All rights reserved.
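The abstract describes an inference scheme in which randomly placed 3D patches are segmented independently and their per-voxel probabilities are fused over overlapping regions, with "depth of coverage" controlling the expected number of patches per voxel. The following is a minimal NumPy sketch of that general idea, not the authors' implementation: the `segment_patch` callable, the patch size, the coverage parameter, and the use of mean fusion are all illustrative assumptions.

```python
import numpy as np

def random_patch_fusion(volume, segment_patch, n_classes,
                        patch_size=(96, 96, 96), depth_of_coverage=4.0,
                        rng=None):
    """Segment a 3D volume by fusing softmax maps from randomly placed patches.

    `segment_patch` is assumed to map a (D, H, W) patch to an
    (n_classes, D, H, W) softmax probability array. Mean fusion over
    overlapping patches is one illustrative choice of statistical fusion.
    """
    rng = np.random.default_rng(rng)
    prob_sum = np.zeros((n_classes, *volume.shape), dtype=np.float32)
    coverage = np.zeros(volume.shape, dtype=np.float32)

    # Choose the number of patches so each voxel is covered
    # ~depth_of_coverage times on average.
    n_patches = int(np.ceil(depth_of_coverage * volume.size / np.prod(patch_size)))

    for _ in range(n_patches):
        # Random spatial initialization of the patch corner.
        corner = [rng.integers(0, s - p + 1)
                  for s, p in zip(volume.shape, patch_size)]
        sl = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))

        probs = segment_patch(volume[sl])            # (n_classes, D, H, W)
        prob_sum[(slice(None),) + sl] += probs
        coverage[sl] += 1.0

    # Mean fusion; voxels never sampled fall back to background (class 0).
    fused = prob_sum / np.maximum(coverage, 1.0)
    labels = fused.argmax(axis=0)
    return labels, coverage

# Example with a dummy network that predicts uniform class probabilities:
if __name__ == "__main__":
    vol = np.random.rand(160, 160, 160).astype(np.float32)
    dummy = lambda patch: np.full((5, *patch.shape), 0.2, dtype=np.float32)
    labels, cov = random_patch_fusion(vol, dummy, n_classes=5, rng=0)
    print(labels.shape, cov.mean())
```

The paper's actual fusion statistic, patch geometry, and base network differ from this sketch; the snippet only illustrates how overlapping random patches can be aggregated into a full-resolution label map without holding a full-resolution network activation in GPU memory.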
URI
https://oasis.postech.ac.kr/handle/2014.oak/120853
DOI
10.1016/j.media.2020.101894
ISSN
1361-8415
Article Type
Article
Citation
Medical Image Analysis, vol. 69, art. no. 101894, 2021-04
Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Lyu, Ilwoo (류일우)
Grad. School of AI

