Automated Extraction of Unstructured Post-SBRT Toxicity Data from Radiology Reports for Database Curation Using Large Language Models
Abstract
Purpose
Large, well-annotated databases are necessary for the development of personalized outcome models. We evaluated the viability of using a large language model (LLM) to extract patient-specific toxicity and progression outcomes from unstructured radiology reports to curate such databases.
Methods
We retrospectively extracted 160 follow-up CT and PET/CT electronic medical record notes for patients treated with lung stereotactic body radiotherapy (SBRT) at our institution from January 2017 through December 2023. Using the Llama 3.3-70B-Instruct LLM, we engineered prompts to extract four clinical endpoints from each radiology report: locoregional progression, distant progression, radiation-related fibrosis, and radiation-related rib fractures. Progression endpoints were classified as yes, no, or maybe, while fibrosis and rib fractures were binary (yes or no). Ground-truth labels were defined by two-grader consensus for the 60-note training set, which was used for prompt development, and by three-grader majority vote for the 100-note test set. LLM performance was evaluated using sensitivity, specificity, and accuracy.
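For illustration, a minimal sketch of how such per-endpoint prompting might be implemented is shown below. It assumes an OpenAI-compatible inference endpoint serving Llama 3.3-70B-Instruct (e.g., via vLLM); the endpoint URL, model identifier, prompt wording, and helper names (ENDPOINTS, extract_endpoints) are illustrative assumptions, not the study's actual prompts.

```python
# Minimal sketch of a per-endpoint extraction loop (not the authors' actual
# prompts). Assumes an OpenAI-compatible endpoint serving Llama 3.3-70B-Instruct;
# the URL, model name, and prompt text below are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Allowed labels per endpoint: three-level for progression, binary for toxicity.
ENDPOINTS = {
    "locoregional progression": ["yes", "no", "maybe"],
    "distant progression": ["yes", "no", "maybe"],
    "radiation-related fibrosis": ["yes", "no"],
    "radiation-related rib fracture": ["yes", "no"],
}

SYSTEM_PROMPT = (
    "You are extracting post-SBRT outcomes from a radiology report. "
    "Answer each question with exactly one of the allowed labels."
)

def extract_endpoints(report_text: str) -> dict:
    """Query the model once per endpoint and return one label per endpoint."""
    labels = {}
    for endpoint, allowed in ENDPOINTS.items():
        question = (
            f"Report:\n{report_text}\n\n"
            f"Does the report indicate {endpoint}? "
            f"Answer with one of: {', '.join(allowed)}."
        )
        response = client.chat.completions.create(
            model="meta-llama/Llama-3.3-70B-Instruct",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
            temperature=0.0,  # deterministic labeling
        )
        answer = response.choices[0].message.content.strip().lower()
        # Fall back to the last allowed label ("maybe" or "no") if the
        # model strays from the label set.
        labels[endpoint] = answer if answer in allowed else allowed[-1]
    return labels
```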
Results
Sensitivity, specificity, and accuracy on the training set (test set) were 0.910 (0.636), 0.895 (0.756), and 0.900 (0.730) for locoregional progression and 0.875 (0.375), 0.953 (0.957), and 0.933 (0.910) for distant progression, respectively. For radiation-related fibrosis, sensitivity, specificity, and accuracy on the training set (test set) were 0.947 (0.917), 0.951 (0.922), and 0.920 (0.950); for radiation-related rib fractures, these values were 1.000 (0.875), 1.000 (0.989), and 1.000 (0.980).
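For reference, these metrics follow the standard confusion-matrix definitions shown below, where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives; how the three-level "maybe" progression labels were mapped to this binary scheme is not specified here.

```latex
% Standard binary confusion-matrix definitions:
\mathrm{Sensitivity} = \frac{TP}{TP + FN}, \quad
\mathrm{Specificity} = \frac{TN}{TN + FP}, \quad
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
```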
Conclusion
The strong performance of our prompt-engineered LLM in extracting radiation-related fibrosis and rib fractures demonstrates the viability of this method for curating patient-specific toxicity data from unstructured clinical notes. The model’s lower progression-extraction performance on the test set suggests that the prompts overfit the training-set notes. Future work will focus on increasing the robustness and generalizability of the progression prompts to mitigate this overfitting.