This study explores human-AI collaboration for categorizing surgical feedback in robot-assisted surgery (RAS) using unsupervised machine learning. The goal was to improve surgical training by automatically analyzing verbal feedback from surgical transcripts, with ultrasound guidance treated as an added dimension in feedback categorization. The dataset comprised 3,912 feedback instances delivered across 31 surgical cases spanning six procedure types. Feedback was defined as any verbal input intended to alter a trainee's behavior or thinking. Transcribed feedback was categorized with BERTopic, a topic-modeling technique that uses embeddings from a pre-trained large language model (BERT) to capture the semantic meaning of text and group similar feedback instances into topics. An initial set of 28 topics was refined by clinicians to 20 based on clinical relevance. Topics were evaluated for "clinical clarity," a measure of their meaningfulness for real-world surgical practice. The resulting topics showed high clinical relevance, covering aspects such as tissue handling, positioning, and layer-depth assessment. After clinician input, the topics' clinical clarity improved significantly, with high inter-rater agreement (ICC = 0.78). The study demonstrates the potential of combining AI-driven topic modeling with human expertise to enhance surgical training, ensuring feedback is clinically relevant and directly applicable to improving surgical skills, including in ultrasound-guided procedures. This approach is a step toward integrating AI with clinical practice, offering insight into feedback mechanisms that can refine surgical education and performance.
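The embed-then-cluster pipeline described above can be sketched as follows. The study used BERTopic (BERT sentence embeddings plus density-based clustering); in this minimal illustration, TF-IDF vectors and KMeans stand in for those two stages so the sketch runs without downloading a pre-trained model, and the feedback strings are hypothetical examples, not the study's transcripts.

```python
# Sketch of the topic-modeling step: vectorize feedback utterances, then
# cluster them into topics. TF-IDF + KMeans are lightweight stand-ins for
# BERTopic's BERT embeddings + HDBSCAN; the feedback strings are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

feedback = [
    "lift the tissue gently before cutting",
    "handle the tissue with less tension",
    "reposition the camera arm to the left",
    "adjust the arm position for a better angle",
    "check the layer depth on the ultrasound view",
    "confirm depth with the ultrasound before dissecting",
]

# Stage 1: represent each utterance as a vector
# (BERTopic would use BERT sentence embeddings here).
vectors = TfidfVectorizer().fit_transform(feedback)

# Stage 2: group semantically similar utterances into topics
# (in the study, 28 machine-derived topics were later reduced to 20
# with clinician input).
topics = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, text in zip(topics, feedback):
    print(label, text)
```

In BERTopic itself, the same two stages are wrapped in a single `fit_transform` call, and clinician-guided merging corresponds to reducing the topic count after fitting.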
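The reported inter-rater agreement (ICC = 0.78) is an intraclass correlation coefficient; the abstract does not state which ICC form was used, so the ICC(2,1) variant below (two-way random effects, absolute agreement, single rater) and the toy ratings are illustrative assumptions.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: n_subjects x k_raters matrix of scores. The study does not
    specify its ICC form; this variant is an illustrative assumption.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-topic means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between-subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between-raters
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical clinical-clarity ratings from two raters over five topics.
scores = np.array([[4, 5], [3, 3], [5, 5], [2, 3], [4, 4]], dtype=float)
print(round(icc2_1(scores), 2))
```

Values near 1 indicate that raters assign nearly identical scores to each topic; 0.78 is conventionally read as good agreement.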