Denoising-Driven Estimation of Cell Microenvironment Parameters from Diffusion MRI
Abstract
Purpose
Diffusion MRI (dMRI) is a promising imaging modality for estimating tumor microenvironment parameters, providing valuable information for early treatment response assessment in radiotherapy. However, the high noise levels of 1.5T MRI degrade parameter estimation accuracy. This study proposes a deep-learning-based denoising framework for dMRI signals that improves estimation accuracy while preserving the interpretability of parameter estimation.
Methods
A total of 100,000 dMRI signals with Rician noise levels of 0.01-0.12 were simulated using the IMPULSED model with PGSE and OGSE sequences, with cell parameters (diameter d, intracellular volume fraction v_in, and extracellular diffusivity D_ex) covering clinically relevant ranges. Three deep-learning models (CNN, MLP, and LSTM) were constructed and trained on the generated data to denoise dMRI signals. The denoised signals were subsequently used to estimate cell parameters by least-squares fitting to the IMPULSED model. The framework was validated in simulation studies and in an in vitro experiment on a 1.5T MRI scanner using HeLa and MC38 cell lines.
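The simulation-and-fitting pipeline above can be sketched as follows. This is a minimal illustrative example, not the study's implementation: a mono-exponential decay stands in for the IMPULSED signal model, and the b-values, noise level, and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy stand-in signal model: mono-exponential decay S(b) = S0 * exp(-b * D).
# The actual study fits the multi-compartment IMPULSED model instead.
def signal_model(params, b):
    s0, diff = params
    return s0 * np.exp(-b * diff)

b_values = np.linspace(0.0, 2.0, 20)            # ms/um^2, hypothetical protocol
clean = signal_model([1.0, 0.6], b_values)      # ground-truth D = 0.6 um^2/ms

# Rician corruption: magnitude of a complex signal with zero-mean Gaussian
# noise of standard deviation sigma on each channel (0.01-0.12 in the study).
sigma = 0.05
noisy = np.sqrt((clean + rng.normal(0.0, sigma, clean.shape)) ** 2
                + rng.normal(0.0, sigma, clean.shape) ** 2)

# Least-squares parameter estimation from the (ideally denoised) signal.
fit = least_squares(lambda p: signal_model(p, b_values) - noisy,
                    x0=[1.0, 1.0], bounds=([0.0, 0.0], [2.0, 3.0]))
print(fit.x)  # estimated [S0, D]
```

In the study, a trained denoising network would be applied to `noisy` before the least-squares step; at low SNR, the Rician noise floor biases the fit, which is what denoising mitigates.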
Results
All three denoising models consistently improved dMRI signal quality and parameter fitting accuracy. The LSTM model achieved the best denoising performance, with an overall denoising Mean Absolute Error (MAE) of 1.60×10⁻². Using LSTM-denoised signals, parameter estimation achieved MAEs of 4.02 μm for d, 11.96% for v_in, and 0.63 μm²/ms for D_ex, outperforming the other models and the non-denoised baseline, particularly at low SNR (<30). The in vitro experiments achieved estimation errors of 0.8%/6.69% for d and v_in in HeLa cells, 1.0%/3.41% in MC38 cells at 3000 g centrifugation force, and 5.57%/6.05% in MC38 cells at 6000 g.
Conclusion
The signal denoising framework significantly enhances the accuracy of cell microenvironment parameter estimation from 1.5T dMRI data, and holds potential for noninvasive treatment response assessment in multi-fraction radiotherapy.