Processing the BraTS 2019 dataset and reading .nii files in Python
BraTS is the dataset of the MICCAI Brain Tumor Segmentation challenge. The BraTS 2018 training set contains 285 cases. Each case has four modalities (T1, T2, FLAIR, T1ce) and requires segmenting three regions: whole tumor (WT), enhancing tumor (ET), and tumor core (TC), corresponding to three labels. Each case consists of the four MRI modality volumes plus one seg file, and all volumes have size (240, 240, 155).
Multi-modal medical image datasets
The BraTS 2019 dataset from the Brain Tumor Segmentation Challenge 2019 contains 335 labeled MRI scans. Each case has four modalities: T1, T1Gd, T2, and FLAIR. The annotations cover three tumor sub-regions: edema (ED), enhancing tumor (ET), and non-enhancing tumor (NET). The corresponding segmentation target ROIs are the enhancing tumor region (ET), the tumor core region (TC = ET + NET), and the whole tumor region (WT = ED + ET + NET).
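The nested targets above (ET, TC = ET + NET, WT = ED + ET + NET) are derived from the label map in the seg file. A small sketch, assuming the standard BraTS label convention (1 = NET/NCR, 2 = ED, 4 = ET):

```python
import numpy as np

def brats_targets(seg):
    """Derive the three nested segmentation targets from a BraTS label map.

    Assumes the usual BraTS encoding: 1 = non-enhancing tumor / necrosis,
    2 = edema, 4 = enhancing tumor (0 = background).
    """
    et = seg == 4                               # enhancing tumor only
    tc = (seg == 1) | (seg == 4)                # tumor core = ET + NET
    wt = (seg == 1) | (seg == 2) | (seg == 4)   # whole tumor = ED + ET + NET
    return wt, tc, et
```

Note that the three regions are nested (ET ⊆ TC ⊆ WT), which is why they are usually evaluated as three binary masks rather than as mutually exclusive classes.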
Code reproduction notes: nnUNet
BraTS 2021 brain tumor segmentation
Code reproduction notes: BSAFusion
Multi-modal disease segmentation with continual learning and adaptive decision fusion
Multi-modal disease segmentation is essential for the diagnosis and treatment of patients. Advanced algorithms have been proposed; however, two challenging issues remain unsolved, i.e., a lack of knowledge sharing and limited inter-modal relationships. To this end, we develop a novel framework for multi-modal disease segmentation, based on improved continual learning and adaptive decision fusion. Specifically, continual learning with 𝑘-means sampling is developed to promote knowledge sharing across multi-modal medical images. In addition, we propose an adaptive decision fusion technique that uses the Naive Bayes algorithm to better exploit the relationships between different modalities. To evaluate the proposed model, we chose two typical tasks, i.e., myocardial pathology segmentation and brain tumor segmentation. Four benchmark datasets, i.e., the myocardial pathology segmentation challenge 2020 (MyoPS 2020), the brain tumor segmentation challenge 2018 (BraTS 2018), BraTS 2019, and BraTS 2020, are utilized to train and test the framework. Both qualitative and quantitative results demonstrate that the proposed model is effective and has advantages over peer state-of-the-art (SOTA) methods.
Multi-modality medical image segmentation via adversarial learning with CV energy functional
Medical image processing methods based on deep learning have gradually become mainstream. Automatic segmentation of brain tumors from multi-modality magnetic resonance images (MRI) using deep learning is key to the diagnosis of gliomas. The proposed hybrid network framework consists of a Segmentor and a Critic. A new Transformer-CV-Unet (TCUnet) is introduced to capture richer semantic features; it serves as the generator of a GAN that performs the segmentation task, increasing robustness and efficiency. With the generator segmenting the target images, the Critic is built to tightly merge the latent representation with hierarchical characteristics from each modality. Moreover, a hybrid adversarial loss with a multi-phase CV energy functional is introduced. The resulting hybrid network, AdvTCUnet, combines the advantages of both methods. Extensive experiments on BraTS 2019–2021 show that the proposed model outperforms existing state-of-the-art techniques for segmenting brain tumor MRI (e.g., the Dice Similarity Coefficients of ET, WT, and TC on BraTS 2021 reach 0.8642, 0.9303, and 0.9060, respectively).
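The Dice Similarity Coefficient quoted in these results is the standard overlap metric for binary masks, Dice = 2|P∩T| / (|P|+|T|). A minimal sketch of how such scores are computed per region (the function name and epsilon smoothing are illustrative choices, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|); the small eps
    avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)
```

In BraTS evaluations this is computed independently for each of the three nested regions (ET, TC, WT), which is why papers report three separate Dice values.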
Entropy-aware dynamic path selection network for multi-modality medical image fusion
Shanghai University
Multiscale Frequency-Guided Image Analyses for Mixed-Modality Medical Image Segmentation
Guangdong University of Technology
