Uncertainty Quantification of a Deep Learning Reconstruction Framework for Nonstop Gated CBCT Using an Auxiliary Network
Abstract
Purpose
Deep learning-based image reconstruction has shown substantial promise for handling the highly non-uniform, under-sampled projection data encountered in nonstop gated CBCT (ngCBCT). However, owing to the statistical nature of data-driven learning and the inductive biases inherent to the model, patient-specific features that deviate from learned patterns may be attenuated or altered in deep neural network outputs, potentially leading to hallucination-induced errors. Reliable uncertainty quantification (UQ) is therefore essential for safe and trustworthy deployment of deep learning-based reconstruction. We propose a computationally efficient post-hoc auxiliary network that predicts voxel-wise reconstruction error for a dual-domain convolutional neural network (DDCNN) developed for ngCBCT reconstruction.
Methods
An auxiliary network was trained to directly predict the voxel-wise absolute reconstruction error of a fully optimized DDCNN. The auxiliary network takes three input channels: the FDK reconstruction from linearly interpolated projections, the FDK reconstruction from the projection-domain output of the DDCNN, and the final image-domain DDCNN reconstruction. Rather than explicitly modeling individual uncertainty components, the network is supervised to learn the primary model's error patterns, implicitly capturing both epistemic and aleatoric uncertainty. Training was performed post-hoc, after convergence of the primary DDCNN model.
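The following is a minimal PyTorch sketch of this post-hoc error-regression scheme, not the authors' implementation: the network depth, layer widths, and names (e.g., AuxiliaryUQNet, train_step) are illustrative assumptions. It shows the essential structure described above: a three-channel input, a non-negative voxel-wise error prediction, and supervision against the frozen DDCNN's absolute reconstruction error.

```python
# Illustrative sketch only; architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn

class AuxiliaryUQNet(nn.Module):
    """Predicts a voxel-wise absolute-error map from three input channels:
    (1) FDK of linearly interpolated projections,
    (2) FDK of the DDCNN projection-domain output,
    (3) the final image-domain DDCNN reconstruction."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
            nn.Softplus(),  # error magnitudes are non-negative
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, D, H, W) stacked input channels
        return self.net(x)

def train_step(aux_net, optimizer, fdk_interp, fdk_proj, recon, ground_truth):
    """One post-hoc training step: the primary DDCNN is frozen; only the
    auxiliary network is updated to regress |recon - ground_truth| voxel-wise."""
    target = (recon - ground_truth).abs()  # supervision: true absolute error
    pred = aux_net(torch.cat([fdk_interp, fdk_proj, recon], dim=1))
    loss = nn.functional.l1_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```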
Results
The proposed auxiliary network generated voxel-wise uncertainty maps that spatially correlated with the true reconstruction errors. When hallucination artifacts occurred in DDCNN outputs, they were consistently identified as localized high-uncertainty regions, demonstrating the method's ability to flag clinically relevant failure modes. Quantitative evaluation showed uncertainty prediction performance comparable to that of deep ensembles and Monte Carlo dropout, while requiring only a single auxiliary forward pass at substantially lower computational cost.
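To make the computational comparison concrete, the sketch below contrasts a Monte Carlo dropout baseline, which needs N stochastic forward passes of the primary model, with the proposed single deterministic pass of the auxiliary network. Function names and the sample count are hypothetical, assuming the AuxiliaryUQNet sketched earlier.

```python
# Illustrative cost comparison; names and sample count are assumptions.
import torch

@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples: int = 20):
    """Baseline: voxel-wise std over N stochastic passes with dropout
    kept active at inference time (N x the cost of one inference)."""
    model.train()  # keep dropout layers stochastic
    samples = torch.stack([model(x) for _ in range(n_samples)], dim=0)
    return samples.std(dim=0)

@torch.no_grad()
def auxiliary_uncertainty(aux_net, stacked_inputs):
    """Proposed: a single deterministic pass of the auxiliary network
    over the three stacked input channels."""
    aux_net.eval()
    return aux_net(stacked_inputs)
```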
Conclusion
This study presents a computationally efficient auxiliary network-based UQ strategy for deep learning reconstruction of ngCBCT. By directly estimating voxel-wise reconstruction error, the proposed method provides quantitative confidence information suitable for clinical quality control. This framework enhances the safety, interpretability, and clinical deployability of deep learning-based ngCBCT reconstruction and can readily be extended to other reconstruction tasks.