Artificial intelligence (AI)-powered, robot-assisted, and ultrasound (US)-guided interventional radiology aims to enhance the efficacy and cost-efficiency of procedures while improving outcomes and reducing the burden on medical personnel. To address the lack of clinical data for training AI models, a novel method is proposed for generating synthetic ultrasound data from real preoperative 3D imaging data. These synthetic data were used to train a deep learning-based detection algorithm for localizing needle tips and target anatomy in US images. Validation on real in vitro US data showed that the models generalized well to unseen data, demonstrating the approach’s potential for AI-based needle and target detection in minimally invasive procedures. The method also enables accurate robot positioning based on 2D US images through a one-time calibration between the US and robot coordinate systems. This approach could bridge the gap between simulation and real-world application, facilitating the development of advanced AI algorithms for US-guided interventions.