Improving Consistency and Reliability in Patient-Facing Radiotherapy Education Using an AI Chatbot
Abstract
Purpose
Patient-facing AI systems are increasingly used for education, yet their performance can vary widely depending on how users communicate. This study presents the development and evaluation of an AI-based chatbot that uses natural language processing to deliver consistent, reliable radiotherapy education across diverse real-world patient interactions.
Methods
A patient-facing chatbot was developed using the ChatGPT platform and configured for educational use through structured prompting and predefined response constraints. The system supports personalized dialogue, topic selection, and self-quizzes for learning reinforcement. Robust fallback mechanisms were implemented to manage ambiguous or unexpected inputs and maintain stable interactions across variable levels of health literacy and familiarity with radiotherapy. User feedback was collected using a 5-point Likert scale evaluating clarity, helpfulness, engagement, and overall satisfaction.
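The fallback behavior described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the deployed system's logic: the topic names, response text, and keyword-matching approach are assumptions for illustration only.

```python
# Hypothetical topic bank; real educational content would be
# clinically reviewed and far more extensive.
RADIOTHERAPY_TOPICS = {
    "side effects": "Common side effects include skin irritation and fatigue...",
    "treatment planning": "Planning uses imaging to target the tumor precisely...",
    "daily sessions": "A typical session lasts about 15 to 30 minutes...",
}

# Fallback message that steers ambiguous input back to supported topics.
FALLBACK = ("I'm not sure I understood. I can explain side effects, "
            "treatment planning, or daily sessions -- which would you like?")

def route(user_input: str) -> str:
    """Return educational content if a known topic is mentioned;
    otherwise return the fallback prompt so the dialogue stays stable."""
    text = user_input.lower()
    for topic, answer in RADIOTHERAPY_TOPICS.items():
        if topic in text:
            return answer
    return FALLBACK
```

In practice such routing would sit in front of (or be encoded in the prompt constraints of) the language model, so that ambiguous or out-of-scope queries are redirected rather than answered unpredictably.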
Results
The chatbot demonstrated strong performance as an educational support tool. Approximately 70% of users rated the chatbot above average for clarity, helpfulness, and appropriate reading duration. The adaptive dialogue structure sustained user engagement, while self-quizzes improved knowledge retention. Importantly, the system maintained consistent behavior across a broad range of user queries, supporting reliable information delivery in patient-facing contexts.
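The "above average" summary reported above can be computed straightforwardly from 5-point Likert responses. The sketch below assumes "above average" means a rating strictly above the scale midpoint of 3; the ratings shown are illustrative placeholders, not study data.

```python
def fraction_above_average(ratings: list[int], midpoint: int = 3) -> float:
    """Fraction of Likert ratings strictly above the scale midpoint."""
    if not ratings:
        return 0.0
    return sum(r > midpoint for r in ratings) / len(ratings)

# Example (fabricated) clarity ratings on a 1-5 scale:
clarity = [5, 4, 3, 4, 5, 2, 4, 3, 5, 4]
print(fraction_above_average(clarity))  # 0.7
```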
Conclusion
This work demonstrates that large language model–based chatbots can be configured to deliver reliable, patient-facing radiotherapy education when guided by structured design and interaction safeguards. The results highlight the potential for AI-driven educational tools to improve consistency in patient education and support scalable deployment in clinical environments.