Leveraging Patient-Specific Imaging and Contour Data to Improve Deep Learning Autocontouring Models for Adaptive Prostate Radiotherapy
Abstract
Purpose
To develop and evaluate patient-specific, deep learning-based autocontouring models that incorporate planning and prior fraction images/contours for improved contouring of the prostate, bladder, and rectum in prostate adaptive radiotherapy (ART).
Methods
The nnU-Net framework was used to develop the deep learning models. A general model was trained using planning CT (pCT) images and contours from 100 previously treated prostate radiotherapy patients. This general model was fine-tuned to create patient-specific models using pCTs and subsequent fraction cone-beam CT images from patients treated with 5-fraction ART. Ten patients were used to optimize the fine-tuning length (number of epochs), and fifteen patients served as an independent test set. Fine-tuning was performed cumulatively, incorporating all data from prior fractions: fraction 1 models were fine-tuned on pCT data only, fraction 2 models on pCT and fraction 1 data, and so on. Autocontoured structures were compared against expert-drawn contours using the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff Distance (95HD).
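For reference, the two evaluation metrics can be computed from binary segmentation masks as sketched below (a minimal illustration using NumPy/SciPy; the function names and the surface-extraction approach are assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th percentile Hausdorff Distance (in mm, given voxel spacing)
    between the surfaces of two binary masks."""
    def surface(m):
        # surface voxels: mask minus its erosion
        return m & ~binary_erosion(m)
    sa, sb = surface(a), surface(b)
    # distance from each surface voxel of one mask to the other mask's surface
    da = distance_transform_edt(~sb, sampling=spacing)[sa]
    db = distance_transform_edt(~sa, sampling=spacing)[sb]
    return np.percentile(np.hstack([da, db]), 95)
```

Both metrics are computed symmetrically over the expert-drawn and autocontoured masks; higher DSC and lower 95HD indicate better agreement.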
Results
Patient-specific models outperformed the general model for the prostate in fractions 2-5, yielding higher median DSC (0.929 vs. 0.911) and lower median 95HD (2.26 mm vs. 3.09 mm). Similarly, rectal contours improved with patient-specific models, showing higher mean DSC (0.933 vs. 0.899) and lower median 95HD (2.26 mm vs. 6.00 mm). Bladder segmentation performance was comparable between the two models, with similar median DSC (0.970 vs. 0.968) and identical median 95HD of 2.00 mm.
Conclusion
Beyond the first fraction, incorporating patient-specific data enhances the performance of deep learning-based autocontouring in adaptive prostate radiotherapy, particularly for the prostate and rectum.