

ACS Research Group Colloquium

Fri. Sep. 26, 12:30 PM – 1:20 PM
Contact: Andrea Wiebe
Location: 3M63


ACS will be hosting a joint colloquium, with three groups presenting for 15 minutes each.

1. Surgery-aware Generative AI for Intraoperative MRI Generation: Mask Guidance and Attention Mechanisms

Intraoperative MRI (iMRI) often suffers from global quality degradation due to rapid, highly accelerated acquisitions, undermining its diagnostic utility during glioma resections. While generative AI methods can transfer the high-quality appearance of preoperative MRI (pMRI) to iMRI, they may hallucinate resected tumors near cavities, creating safety-critical false positives. To jointly address global enhancement and pathology preservation, we propose SurgAware-GAN, featuring: (1) green masks that downweight losses within resection/tumor regions (λ_g=0.3) to suppress false restoration; (2) yellow masks (a 5 mm peri-tumoral ring) that upweight losses to enforce anatomical clarity and boundary definition; and (3) CBAM-augmented generators that improve global and local feature representations. Trained on the ReMIND dataset (n=114), SurgAware-GAN attains strong global SSIM and near-0% tumor hallucination inside cavities, while providing practical runtime (~8.3 s/volume), enabling near–real-time intraoperative enhancement.
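A minimal sketch of the region-weighted reconstruction loss described above, assuming a PyTorch implementation: the green-mask weight λ_g = 0.3 comes from the abstract, while the yellow-ring upweighting factor, the L1 base loss, and all function and variable names are illustrative assumptions rather than the authors' actual code.

```python
import torch

def region_weighted_l1(pred, target, green_mask, yellow_mask,
                       lambda_g=0.3, lambda_y=2.0):
    """Weighted L1 between generated and reference volumes, shape (B, 1, D, H, W).

    green_mask:  1 inside resection/tumor regions (loss downweighted by lambda_g).
    yellow_mask: 1 inside the peri-tumoral ring (loss upweighted by lambda_y).
    lambda_y is a hypothetical value; the abstract does not give it.
    """
    # Start with uniform weights, then adjust per region.
    weights = torch.ones_like(pred)
    weights = torch.where(green_mask.bool(), lambda_g * weights, weights)
    weights = torch.where(yellow_mask.bool(), lambda_y * weights, weights)
    return (weights * (pred - target).abs()).mean()
```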

BIO: I am a Mitacs Globalink Research Intern currently completing a three-month research placement at the University of Winnipeg under the supervision of Dr. Qian Liu. I come from Beijing University of Technology (BJUT), China. My research focuses on generative AI for medical imaging, with an emphasis on the safe and faithful generation and enhancement of intraoperative MRI.

2. Towards Robust Alzheimer’s Disease Classification with Multimodal Fusion

Alzheimer's Disease (AD) poses a significant global burden, yet current unimodal diagnostic approaches using MRI alone miss critical complementary disease markers essential for accurate early detection. We developed a deep multimodal fusion framework that combines structural MRI with structured clinical data for enhanced AD diagnosis. Our approach employed FT-Transformer for tabular clinical variables and DeiT for brain MRI processing, integrating modalities through early concatenation and mid-fusion via modality-specific projections. Evaluation across five public AD datasets demonstrated that our mid-fusion approach consistently outperformed unimodal and early-fusion baselines, confirming that deep multimodal methods substantially enhance diagnostic accuracy. In this presentation, we will introduce our novel mid-fusion framework, discuss its performance across geographically diverse datasets, and highlight the interpretability aspects of our methodology.
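A minimal sketch of mid-fusion via modality-specific projections, assuming PyTorch; the embedding dimensions, hidden size, class count, and the use of precomputed DeiT and FT-Transformer embeddings are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MidFusionClassifier(nn.Module):
    """Project each modality's embedding into a shared space, concatenate, classify."""

    def __init__(self, img_dim=768, tab_dim=192, fused_dim=256, n_classes=3):
        super().__init__()
        # Modality-specific projections into a common fused dimension.
        self.img_proj = nn.Sequential(nn.Linear(img_dim, fused_dim), nn.GELU())
        self.tab_proj = nn.Sequential(nn.Linear(tab_dim, fused_dim), nn.GELU())
        # Joint classification head on the concatenated projected features.
        self.head = nn.Linear(2 * fused_dim, n_classes)

    def forward(self, img_emb, tab_emb):
        # img_emb: (B, img_dim), e.g. a DeiT [CLS] embedding of the MRI.
        # tab_emb: (B, tab_dim), e.g. an FT-Transformer embedding of clinical data.
        z = torch.cat([self.img_proj(img_emb), self.tab_proj(tab_emb)], dim=-1)
        return self.head(z)

# Example with random tensors standing in for the backbone outputs.
model = MidFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 192))
```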

BIO: Manjot Sran and Sujay Rittikar are thesis-based Master's students in Applied Computer Science at the University of Winnipeg, supervised by Dr. Sheela Ramanna. Both hold Bachelor's degrees in Computer Science and Engineering—Manjot from I.K. Gujral Punjab Technical University, India, and Sujay from Shivaji University, India. This research project was conducted under the guidance of Dr. Sheela Ramanna and Dr. Liu.

Manjot specializes in multimodal information processing, affective computing, and healthcare AI, with expertise in fusion strategies and soft-computing classification methods. Sujay focuses on language models, multilingualism, and multimodal healthcare AI, and is supported by the UW President's Scholarship for World Leaders. He brings industry experience from software companies in the real estate, finance, and compliance sectors. Inspired by this project, the two are collaborating through their startup on solutions to support dementia caregivers.