Multimodal Brain Tumor Segmentation with Explainable AI
This project applies deep learning to medical imaging by segmenting brain tumors from multimodal MRI data. Explainability techniques such as Grad-CAM and Integrated Gradients were integrated so that the model's decisions remain interpretable and clinically trustworthy.

Problem Statement
Accurate brain tumor segmentation is essential for diagnosis and treatment but is often time-consuming and subjective. The objective is to automate this process using CNN architectures while maintaining transparency in predictions.
Results
The models were evaluated using Dice score, precision, recall, and Hausdorff distance. DeepLabV3+ achieved the best overall performance with a Dice score of 0.8448, a recall of 0.9060, and a Hausdorff distance of 4.68, while ResAttUNet delivered slightly tighter boundary localization with a Hausdorff distance of 4.60.

To make the predictions interpretable, I implemented Grad-CAM and Integrated Gradients to visualize the regions each model focuses on. These visualizations showed clear alignment between predicted tumor regions and actual tumor boundaries, which adds clinical interpretability to the tool. The result is a high-performing, explainable segmentation tool that can assist radiologists in diagnosis and treatment planning. Sketches of the metric computation and of Grad-CAM for segmentation follow.
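For concreteness, here is a minimal sketch of how the two headline metrics can be computed for binary masks with NumPy and SciPy. This is illustrative, not the project's exact evaluation code (BraTS work also commonly reports the 95th-percentile Hausdorff distance):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def hausdorff_distance(pred: np.ndarray, target: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground point sets of two 2D masks."""
    pred_pts = np.argwhere(pred)      # (N, 2) coordinates of foreground pixels
    target_pts = np.argwhere(target)
    if len(pred_pts) == 0 or len(target_pts) == 0:
        return float("inf")
    # directed_hausdorff returns (distance, index_u, index_v)
    return max(directed_hausdorff(pred_pts, target_pts)[0],
               directed_hausdorff(target_pts, pred_pts)[0])
```

And a sketch of Grad-CAM adapted to segmentation, the idea behind the visualizations above: gradients of the tumor-class score are average-pooled into channel weights for the activations of a late convolutional layer. This assumes a PyTorch model that outputs per-class logits; the layer choice and the pixel-summed score are illustrative assumptions, not the project's exact implementation:

```python
import torch
import torch.nn.functional as F

def grad_cam_segmentation(model, x, target_layer, class_idx=1):
    """Grad-CAM heatmap for a segmentation model.

    model        -- a CNN segmentation network in eval mode (gradients enabled)
    x            -- input tensor of shape (1, C, H, W); here C=4 MRI modalities
    target_layer -- the conv layer whose activations are explained
    class_idx    -- output channel to explain (tumor class)
    """
    activations, gradients = [], []

    def fwd_hook(_, __, output):
        activations.append(output)

    def bwd_hook(_, __, grad_output):
        gradients.append(grad_output[0])

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        logits = model(x)                            # (1, num_classes, H, W)
        # For segmentation, back-propagate the summed logit of the target
        # class over the predicted tumor pixels rather than a single scalar.
        mask = (logits.argmax(dim=1, keepdim=True) == class_idx).float()
        score = (logits[:, class_idx:class_idx + 1] * mask).sum()
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    acts, grads = activations[0], gradients[0]       # (1, K, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled grads
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = cam / (cam.max() + 1e-7)                   # normalize to [0, 1]
    return cam.squeeze().detach().cpu().numpy()
```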
Methodology

This project focused on segmenting brain tumors from multimodal MRI scans using deep learning. I used the BraTS 2021 dataset, which includes the FLAIR, T1, T1ce, and T2 modalities. The images were preprocessed to a standard size (128×128) and normalized in intensity (see the sketch below). I trained and compared multiple segmentation architectures, including DeepLabV3+ with a ResNet50 backbone, ResAttUNet, and a U-Net with BesNet enhancements.
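As an illustrative sketch (not the project's exact pipeline), loading one BraTS case, stacking the four modalities, z-score normalizing over brain voxels, and resizing to 128×128 might look like this in PyTorch. The `<case_id>_<modality>.nii.gz` naming follows the BraTS convention, and taking a single mid-axial slice is a simplification:

```python
import nibabel as nib
import torch
import torch.nn.functional as F

MODALITIES = ("flair", "t1", "t1ce", "t2")   # the four BraTS input channels

def load_case(case_dir: str, case_id: str) -> torch.Tensor:
    """Load one BraTS case as a (4, 128, 128) tensor for a single axial slice."""
    channels = []
    for mod in MODALITIES:
        vol = nib.load(f"{case_dir}/{case_id}_{mod}.nii.gz").get_fdata()
        sl = vol[:, :, vol.shape[2] // 2]    # middle axial slice as an example
        # z-score normalization over nonzero (brain) voxels
        brain = sl[sl > 0]
        if brain.size:
            sl = (sl - brain.mean()) / (brain.std() + 1e-7)
        channels.append(torch.from_numpy(sl).float())
    x = torch.stack(channels).unsqueeze(0)   # (1, 4, H, W)
    x = F.interpolate(x, size=(128, 128), mode="bilinear", align_corners=False)
    return x.squeeze(0)                      # (4, 128, 128)
```

If the models were built with the segmentation_models_pytorch package (an assumption; the write-up does not name a framework), the DeepLabV3+/ResNet50 variant could be instantiated as:

```python
import segmentation_models_pytorch as smp

# DeepLabV3+ with a ResNet50 encoder, adapted to 4 MRI input channels.
model = smp.DeepLabV3Plus(
    encoder_name="resnet50",
    encoder_weights="imagenet",   # ImageNet pretraining is an assumption
    in_channels=4,                # FLAIR, T1, T1ce, T2
    classes=2,                    # background vs. tumor (assumed label set)
)
```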

Conclusion
The final system bridges deep learning performance and model interpretability, offering a usable, transparent tool for radiology. It also demonstrates the value of multimodal data fusion for improving diagnostic precision.