Seamless Clinical Implementation of AutoContours ZeroClick
Abstract
Purpose
To assess whether switching from one AI autocontouring software package for organ-at-risk (OAR) segmentation to another can be done seamlessly in a clinical radiation therapy environment by comparing 1) the number of OARs available, 2) the quality of the OAR contours, 3) the computation time, and 4) the usability.
Methods
A set of 29 CT image sets was anonymized and autocontoured with two software packages from Radformation (Radformation, New York, USA): 1) Limbus AI v1.8 and 2) AutoContours ZeroClick v2.6 Beta. Dice scores were calculated to assess the overlap between corresponding contours, and the time taken to contour each case was recorded. The workflows of the two packages were compared, and any errors or bugs were documented.
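The Dice overlap used above can be sketched as follows. This is a minimal illustration, not the study's actual analysis pipeline; it assumes each contour has been rasterized into a set of voxel coordinates (the function name `dice_score` and the toy masks are ours).

```python
def dice_score(voxels_a: set, voxels_b: set) -> float:
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|).

    Returns 1.0 when both contours are empty (perfect trivial agreement).
    A score of 1.0 means identical contours; 0.0 means no overlap.
    """
    total = len(voxels_a) + len(voxels_b)
    if total == 0:
        return 1.0
    return 2.0 * len(voxels_a & voxels_b) / total

# Toy example: two 2x2 voxel patches sharing one row (2 of 4 voxels each)
a = {(i, j) for i in range(2) for j in range(2)}      # rows 0-1
b = {(i, j) for i in range(1, 3) for j in range(2)}   # rows 1-2
print(dice_score(a, b))  # → 0.5
```

A Dice score of 0, as seen for a small fraction of contours in the Results, corresponds to two contours with no overlapping voxels at all (e.g., one structure missing or grossly misplaced).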
Results
A total of 620 OARs were contoured, of which 88% had a Dice score >= 0.9 and 9.0% scored between 0.7 and 0.9. Notably, only 2.4% had a Dice score of 1.0, even though the models used to generate the contours were deemed identical in both software packages, and 1.5% of contours had a Dice score of 0. Processing time was significantly faster with ZeroClick in all but one case, and ZeroClick offers a larger number of available OARs. As for usability, ZeroClick retains all the features of Limbus AI while providing more flexibility: it is a web-based application and offers both on-premises and cloud-based computing.
Conclusion
For most OARs, the contours from the two software packages were very similar, indicating that the transition from Limbus AI to ZeroClick should be seamless for clinical use. Computation times were overall faster with ZeroClick. Nevertheless, a few outliers were detected, and careful examination of all clinically relevant OAR models is recommended when switching to a different autocontouring software.