M.Sc. Dissertation Proposal

Anatomical Segmentation from MRI/CT Datasets

2024, Apr. 27

Vision and Motivation

We aim to develop a human organ segmentation pipeline that applies deep learning to 20,000 whole-body MRI (magnetic resonance imaging) datasets from the UKBB and NAKO databases and to 8,000 CT (computed tomography) datasets from the AbdomenAtlas-8K database, without human intervention or parameter tuning. The segmentation problem can be expressed in set-theoretic terms as follows: given a medical image (a set of voxels), its segmentation produces a disjoint union of anatomical structures (subsets), subject to similarity constraints within each structure (e.g., on the radiodensity scale).
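
A minimal formalization of this view, using notation of our own choosing (the voxel set I, the structures S_k, the intensity function f, and the tolerance tau_k are illustrative and not fixed in this proposal):

    % Segmentation as a partition of the voxel set I into K structures plus background S_0.
    \[
      I \;=\; \bigsqcup_{k=0}^{K} S_k , \qquad S_i \cap S_j = \emptyset \quad (i \neq j),
    \]
    % with a within-structure similarity constraint on the image intensity f
    % (e.g., radiodensity in CT), tau_k being a class-specific tolerance:
    \[
      \forall k \;\; \forall v, w \in S_k : \; \lvert f(v) - f(w) \rvert \le \tau_k .
    \]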

State-of-the-Art

Anatomical segmentation is a challenging task in medical imaging [2019-Hesamian] [2024-Gross], as state-of-the-art segmentation algorithms are unable to directly differentiate multiple organs in a 3D dataset. Currently, anatomical segmentation is performed either manually, slice by slice, or through semi-automated procedures such as thresholding and region growing [2018-Rosenhain]. These methods require close attention to detail and expertise in anatomy and imaging modalities, and they are highly repetitive and time-consuming. Deep neural networks have shown promise in improving segmentation accuracy in medical imaging [2021-Liu] [2019-Heinrich] [2023-Bonaldi], but no deep neural network yet segments multiple anatomical structures directly. Segmentation therefore remains the bottleneck that limits the application of 3D reconstruction methods for anatomical structures from 3D imaging datasets.
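
For concreteness, the sketch below shows such a semi-automated procedure in its simplest form: global thresholding combined with region growing from a manually chosen seed voxel (scikit-image). The threshold, tolerance, and seed values are illustrative assumptions, not part of any cited protocol.

    # Semi-automated baseline: thresholding + region growing from a seed voxel.
    # Threshold, tolerance, and the synthetic volume are illustrative only.
    import numpy as np
    from skimage.segmentation import flood

    def semi_automated_segmentation(volume, seed, threshold=300.0, tolerance=50.0):
        """Return a binary mask grown from `seed` inside a thresholded 3D volume."""
        # Step 1: thresholding, e.g. on the radiodensity scale (Hounsfield units in CT).
        candidate = volume >= threshold
        # Step 2: region growing -- keep only voxels connected to the seed whose
        # intensity stays within `tolerance` of the seed intensity.
        grown = flood(volume, seed, tolerance=tolerance)
        return candidate & grown

    # Usage: the expert still has to pick a seed voxel inside every target structure
    # of every scan, which is what makes the procedure repetitive and time-consuming.
    volume = np.random.normal(100.0, 30.0, size=(64, 64, 64)).astype(np.float32)
    volume[20:40, 20:40, 20:40] += 400.0  # a bright synthetic "organ"
    mask = semi_automated_segmentation(volume, seed=(30, 30, 30))
    print(mask.shape, int(mask.sum()))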

Research Methodology

MRI and CT scans produce 3D data, so instead of analyzing 2D slices we use 3D convolutional kernels to segment anatomical structures and diseased tissues. These kernels allow us to examine the voxels (3D pixels) surrounding a given voxel, identifying edges or textures that distinguish one tissue from another. By using a 3D convolutional neural network (CNN), we can more accurately capture the geometric information of each anatomical structure, which might be lost if the 3D data were analyzed slice by slice. A CNN is made up of an input layer, an output layer, and several hidden layers. The input layer receives the 3D input image (voxels), while each hidden layer carries out a specific operation, such as convolution, pooling, or activation. During training, the output layer is compared against voxels that have already been labeled; these labels classify voxels as belonging to bone, intestine, and so forth.

Our segmentation pipeline, powered by deep learning, takes a 3D scan as input and predicts the surfaces of anatomical structures. The pipeline comprises three modules: preprocessing, a deep learning backbone, and postprocessing. In the preprocessing module, the signal intensity is normalized, noise and artifacts are removed, and the 3D dataset is resampled to the desired resolution.

For the backbone, we use a 3D U-Net-like CNN with six levels of encoding and decoding blocks and a specific number of feature channels per level. The CNN produces a probability map for each organ, applying a sigmoid function as the final activation (a softmax over the prediction classes can be used instead when the classes are mutually exclusive). The postprocessing module then performs binarization and volumetric reconstruction of the output: the probability maps for all prediction classes of the anatomical segmentation, such as organs and bones, are binarized by thresholding, so that the postprocessing module transforms the voxel-wise probability maps into the final model prediction.
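
The sketch below illustrates such a backbone in PyTorch. The six encoder/decoder levels and the sigmoid output follow the description above; the channel widths, kernel sizes, normalization layers, and the number of prediction classes are assumptions made for illustration, not the final architecture.

    # Minimal 3D U-Net-like backbone (PyTorch). Six levels of encoding/decoding
    # blocks, sigmoid output; channel widths and class count are assumptions.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        """Two 3x3x3 convolutions with instance normalization and ReLU."""
        return nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    class UNet3D(nn.Module):
        def __init__(self, in_channels=1, num_classes=10, base_channels=16, levels=6):
            super().__init__()
            chs = [base_channels * 2 ** i for i in range(levels)]  # e.g. 16 ... 512
            self.encoders = nn.ModuleList()
            self.pools = nn.ModuleList()
            prev = in_channels
            for c in chs:
                self.encoders.append(conv_block(prev, c))
                self.pools.append(nn.MaxPool3d(2))
                prev = c
            # Decoder: upsample, concatenate the matching encoder feature map, convolve.
            self.ups = nn.ModuleList()
            self.decoders = nn.ModuleList()
            for c_low, c_high in zip(chs[::-1][:-1], chs[::-1][1:]):
                self.ups.append(nn.ConvTranspose3d(c_low, c_high, kernel_size=2, stride=2))
                self.decoders.append(conv_block(c_high * 2, c_high))
            # One probability map per anatomical structure.
            self.head = nn.Conv3d(chs[0], num_classes, kernel_size=1)

        def forward(self, x):
            skips = []
            for enc, pool in zip(self.encoders[:-1], self.pools[:-1]):
                x = enc(x)
                skips.append(x)
                x = pool(x)
            x = self.encoders[-1](x)  # bottleneck (deepest level)
            for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
                x = up(x)
                x = dec(torch.cat([x, skip], dim=1))
            return torch.sigmoid(self.head(x))  # voxel-wise probabilities per class

    # Usage: a single-channel 64^3 patch yields per-class probability maps of the same size.
    net = UNet3D(in_channels=1, num_classes=10)
    probs = net(torch.zeros(1, 1, 64, 64, 64))
    print(probs.shape)  # torch.Size([1, 10, 64, 64, 64])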

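Likewise, a minimal sketch of the pre- and postprocessing modules around the backbone, using NumPy and SciPy. The target voxel spacing, the Gaussian denoising step, and the 0.5 binarization threshold are illustrative assumptions rather than fixed design choices.

    # Preprocessing: intensity normalization, simple denoising, resampling.
    # Postprocessing: binarization of the per-class probability maps.
    # Spacing, sigma, and threshold values below are assumptions.
    import numpy as np
    from scipy import ndimage

    def preprocess(volume, spacing, target_spacing=(1.5, 1.5, 1.5)):
        """Normalize signal intensity and resample the 3D dataset to the desired resolution."""
        volume = (volume - volume.mean()) / (volume.std() + 1e-8)  # zero mean, unit variance
        volume = ndimage.gaussian_filter(volume, sigma=0.5)        # simple noise suppression
        zoom = [s / t for s, t in zip(spacing, target_spacing)]    # resample to target spacing
        return ndimage.zoom(volume, zoom, order=1)

    def postprocess(prob_maps, threshold=0.5):
        """Binarize the per-class probability maps into the final model prediction."""
        return (prob_maps >= threshold).astype(np.uint8)           # one binary mask per structure

    # Usage on a dummy 2 mm isotropic scan and dummy probability maps.
    scan = preprocess(np.random.rand(80, 80, 80).astype(np.float32), spacing=(2.0, 2.0, 2.0))
    masks = postprocess(np.random.rand(10, *scan.shape).astype(np.float32))
    print(scan.shape, masks.shape, masks.dtype)
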
References

[2018-Rosenhain] Rosenhain, S. et al. A preclinical micro-computed tomography database including 3D whole body organ segmentations. Sci. Data 5, 1–9 (2018).
https://doi.org/10.1038/sdata.2018.294
[2019-Hesamian] Hesamian, M. H., Jia, W., He, X. & Kennedy, P. Deep learning techniques for medical image segmentation: achievements and challenges. J. Digit. Imaging 32, 582–596 (2019).
https://doi.org/10.1007/s10278-019-00227-x
[2019-Heinrich] Heinrich, M. P., Oktay, O. & Bouteldja, N. OBELISK-Net: fewer layers to solve 3D multi-organ segmentation with sparse deformable convolutions. Med. Image Anal. 54, 1–9 (2019).
https://doi.org/10.1016/j.media.2019.02.006
[2021-Liu] Liu, X., Song, L., Liu, S. & Zhang, Y. A Review of Deep-Learning-Based Medical Image Segmentation Methods. Sustainability 13, 1224 (2021).
https://doi.org/10.3390/su13031224
[2023-Bonaldi] Bonaldi, L., Pretto, A., Pirri, C., Uccheddu, F., Fontanella, C. G. & Stecco, C. Deep Learning-Based Medical Images Segmentation of Musculoskeletal Anatomical Structures: A Survey of Bottlenecks and Strategies. Bioengineering 10, 137 (2023).
https://doi.org/10.3390/bioengineering10020137
[2024-Gross] Gross, M., Huber, S., Arora, S., Ze’evi, T., Haider, S., Kucukkaya, A., Iseke, S., Kuhn, T., Gebauer, B., Michallek, F., Dewey, M., Vilgrain, V., Sartoris, R., Ronot, M., Jaffe, A., Strazzabosco, M., Chapiro, J. & Onofrey, J. Automated MRI liver segmentation for anatomical segmentation, liver volumetry, and the extraction of radiomics. Eur. Radiol. (2024, online first).
https://doi.org/10.1007/s00330-023-10495-5