AI-Generated Patient Interactions for Medical Training

Patient communication is the cornerstone of good medicine, yet the skills needed to navigate diverse patient personalities, from the reticent to the anxious, take time and practice to develop. How can physicians-in-training build confidence when the patient in front of them doesn't fit the mold? A team of researchers from UCSF and leading engineering institutions in Korea proposes that simulated patients can help fill this gap in physician education.

Jae Ho Sohn, MD, MS, an assistant professor in UCSF’s Department of Radiology and Biomedical Imaging, is the lead medical collaborator on PatientSim, a machine learning project. Powered by the open-source large language model (LLM) Llama 3.3 70B, PatientSim generates highly realistic, interactive, virtual patients that allow physicians to train and test their skills across a wide range of scenarios. Sohn and his collaborators from the Korea Advanced Institute of Science & Technology (KAIST) and Ewha University will present this project at the Conference on Neural Information Processing Systems (NeurIPS) in San Diego on December 3, 2025.

Using clinical profiles drawn from real-world data in the MIMIC-ED and MIMIC-IV datasets, PatientSim generates complex medical scenarios and presents them in natural language, creating a distinct persona for each virtual patient. These personas are defined along four key axes: personality, language proficiency, medical history recall, and cognitive confusion level. This level of customization allows PatientSim to create 37 unique combinations of patient presentations, far exceeding the limited scope of standard curriculum examples.
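To make the four-axis persona design concrete, here is a minimal sketch of how such personas could be represented and rendered into an LLM system prompt. The axis values and the `to_system_prompt` helper are hypothetical illustrations, not PatientSim's actual categories or code; the project curates its combinations down to 37 valid personas, whereas this sketch simply enumerates the raw Cartesian product.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical axis values for illustration only; the real PatientSim
# categories and their counts may differ.
PERSONALITIES = ["plain", "anxious", "reticent", "distrustful"]
LANGUAGE_PROFICIENCY = ["fluent", "intermediate", "basic"]
RECALL = ["high", "low"]
CONFUSION = ["none", "mild", "severe"]

@dataclass(frozen=True)
class PatientPersona:
    """One virtual patient, defined along the four persona axes."""
    personality: str
    language_proficiency: str
    recall: str
    confusion: str

    def to_system_prompt(self) -> str:
        # Render the persona as a system-prompt fragment that would
        # condition the LLM playing the patient.
        return (
            f"You are a patient with a {self.personality} personality, "
            f"{self.language_proficiency} language proficiency, "
            f"{self.recall} recall of your medical history, and "
            f"{self.confusion} cognitive confusion."
        )

# Enumerate every raw combination of axis values.
personas = [
    PatientPersona(*combo)
    for combo in product(PERSONALITIES, LANGUAGE_PROFICIENCY, RECALL, CONFUSION)
]
print(len(personas))  # 4 * 3 * 2 * 3 = 72 raw combinations before curation
```

In practice, some combinations may be clinically implausible or redundant, which is presumably why the project works with a curated subset rather than the full product of axis values.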

Sohn observes, “PatientSim lets medical students rehearse the hardest parts of medicine before they ever step into the room with a real patient. Just like pilots train in flight simulators, medical students can now practice navigating a spectrum of patient emotions, communication styles, and medical complexities. That means more confident clinicians and ultimately safer, more empathetic patient care.”

The potential for PatientSim extends beyond the teaching clinic. “As healthcare systems adopt more AI tools for triage, navigation, and patient engagement, we need AI that realistically simulates how patients speak, worry, and ask questions,” Sohn notes. “PatientSim gives us a safe, privacy-preserving way to generate millions of realistic interactions so that not only the doctors in training but also future AI systems can be trained well.”

Engineers are currently struggling to develop the next generation of "AI doctors" because of a critical shortage of high-quality, real-world patient behavior data. Patient encounters are difficult to capture at scale, and privacy concerns are ever-present. PatientSim could one day enable the creation of millions of artificial patients, serving as the training and testing environment needed to advance future medical dialogue systems.

Computer scientists Daeun Kyung (KAIST), Edward Choi (KAIST), and Soo Kim (Ewha University) led PatientSim development, while Sohn provided clinical radiology and machine learning expertise.

The code is available at https://github.com/dek924/PatientSim.