Virtual avatars inspire trust in ultrasound robots

[Researchers at the Technical University of Munich (TUM) tested the stress-reducing and trust-building potential of three different scenarios involving a virtual (technology-mediated) agent that guides patients through a procedure conducted by an autonomous robotic ultrasound system (they also tested a non-mediated, agent-free scenario). The results have interesting implications for the roles of presence in the design of user experiences in healthcare, and likely other, contexts. See the original version of the story and a second version that appears on the healthcare-in-europe.com website for more images, and read the open-access article in IEEE Transactions on Visualization and Computer Graphics for more details (I’ve added the abstract following the reference below). –Matthew]

[Image: Scenes from the experimental setup: A virtual avatar guides the patient. It can explain the examination and answer patients’ questions in any language. Meanwhile, the autonomous robot system performs the actual examination. Source: Song et al. (2025) (CC BY 4.0); editing: HiE/WB; via healthcare-in-europe.com]

Virtual avatars inspire trust in ultrasound robots

September 10, 2025

Patients have more confidence in autonomous robotic ultrasound systems when an avatar guides them through the process. This was shown in a study by a team led by Prof. Nassir Navab at the Technical University of Munich (TUM). The virtual agent explains what it is doing, answers questions and can speak any language. Such systems are intended especially for use in regions with a shortage of doctors.

A large screen, virtual reality glasses, a robotic arm with an ultrasound head and a powerful computer: that is the equipment needed by TUM researcher Tianyu Song from the Chair of Informatics Applications in Medicine for the autonomous examination of the aorta, carotid artery or forearm arteries. To help overcome possible doubts about the autonomous technical system, researchers have now created a virtual environment in which an avatar guides patients through the examination procedure. After putting on VR glasses, patients see an avatar that leads the conversation and answers questions. “This makes the whole process more human and friendly,” says Prof. Nassir Navab, the head of the research chair. “And this has been proven to reduce stress among users of autonomous systems.”

Virtual environment reduces patients’ stress levels

To find out more, the researchers compared the stress levels of 14 male and female patients of varying ages across four scenarios, three of them virtually supported: in the first, the avatar appeared superimposed on the real environment; in the second, it appeared in a virtual environment into which real elements were blended; and in the third, it appeared in a completely virtual environment. These were compared with an avatar-free, purely real variant. The researchers fitted the test subjects with sensors for an electrocardiogram (ECG) to record heart rate variability. “The more this value drops during treatment, the higher the stress level of the person being treated,” explains Tianyu Song. The result: all three virtually supported scenarios proved significantly less stressful than the non-virtual treatment. When asked which of the three virtually supported scenarios they trusted the most and which felt best, participants favored the avatar in the real environment. “That’s why we’re now using it for demonstrations,” says Professor Navab, whose research is supported by the Bavarian Research Foundation as part of the ForNeRo research project (Research Network – Seamless and Ergonomic Integration of Robotics into Clinical Workflows).
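The story doesn’t say which heart rate variability measure the team used; purely as an illustration, the Python sketch below computes RMSSD, a common time-domain HRV metric, from R-peak timestamps extracted from an ECG. The timestamps here are hypothetical, not study data.

import numpy as np

def rmssd(r_peak_times_s):
    """Root mean square of successive differences (RMSSD), in milliseconds.
    Lower values generally indicate higher physiological stress."""
    rr_ms = np.diff(r_peak_times_s) * 1000.0   # R-R intervals in milliseconds
    diffs = np.diff(rr_ms)                     # beat-to-beat changes
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical R-peak times (in seconds) from two recording segments:
baseline  = np.array([0.00, 0.82, 1.66, 2.44, 3.30, 4.10])
treatment = np.array([0.00, 0.80, 1.60, 2.40, 3.20, 4.00])

print(f"baseline RMSSD:  {rmssd(baseline):.1f} ms")   # variable beats -> higher HRV
print(f"treatment RMSSD: {rmssd(treatment):.1f} ms")  # uniform beats -> lower HRV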

Large language model masters accents

The main reason for the reduced stress levels of those being treated is the avatar, which usually has a female voice in the department’s demonstrations and walks the patient through the examination. It appears to hold the ultrasound probe and guide it to the arm, while the robot performs the actual movements, and it talks to the patient throughout. To make this possible, speech-recognition software converts the patient’s questions into text, a large language model composes suitable answers based on pre-formulated instructions, and the answers are then converted back into spoken words. “An important trust-building factor is the fact that the avatar not only speaks different languages, but can even do so in regional accents,” says researcher Song. For example, it can speak German with an Austrian accent or even with an American accent. The avatar can also communicate non-verbally: it gestures, pauses briefly between sentences, and turns to face patients when they speak.
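The story doesn’t name the specific components, so the Python sketch below only illustrates the described pipeline in general terms: transcribe the patient’s spoken question, have a large language model answer under pre-formulated instructions, and synthesize the answer as speech. It assumes an OpenAI-style API; the model names and system prompt are placeholders, not the authors’ actual stack.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Pre-formulated instructions constraining the agent's answers (placeholder text)
SYSTEM_PROMPT = (
    "You are a virtual assistant guiding a patient through a robotic "
    "ultrasound examination. Answer briefly, calmly and reassuringly."
)

def answer_patient(audio_path, out_path="reply.mp3"):
    # 1. Speech to text: convert the patient's spoken question into text
    with open(audio_path, "rb") as f:
        question = client.audio.transcriptions.create(model="whisper-1", file=f).text

    # 2. LLM: compose a suitable answer based on the pre-formulated instructions
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content

    # 3. Text to speech: convert the answer back into spoken words
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
    with open(out_path, "wb") as out:
        out.write(speech.read())
    return reply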

Publication:

Tianyu Song, Felix Pabst, Ulrich Eck, Nassir Navab; Enhancing Patient Acceptance of Robotic Ultrasound through Conversational Virtual Agent and Immersive Visualizations; IEEE Transactions on Visualization and Computer Graphics, May 2025; https://ieeexplore.ieee.org/document/10916942

Abstract:
Robotic ultrasound systems have the potential to improve medical diagnostics, but patient acceptance remains a key challenge. To address this, we propose a novel system that combines an AI-based virtual agent, powered by a large language model (LLM), with three mixed reality visualizations aimed at enhancing patient comfort and trust. The LLM enables the virtual assistant to engage in natural, conversational dialogue with patients, answering questions in any format and offering real-time reassurance, creating a more intelligent and reliable interaction. The virtual assistant is animated as controlling the ultrasound probe, giving the impression that the robot is guided by the assistant. The first visualization employs augmented reality (AR), allowing patients to see the real world and the robot with the virtual avatar superimposed. The second visualization is an augmented virtuality (AV) environment, where the real-world body part being scanned is visible, while a 3D Gaussian Splatting reconstruction of the room, excluding the robot, forms the virtual environment. The third is a fully immersive virtual reality (VR) experience, featuring the same 3D reconstruction but entirely virtual, where the patient sees a virtual representation of their body being scanned in a robot-free environment. In this case, the virtual ultrasound probe mirrors the movement of the probe controlled by the robot, creating a synchronized experience as it touches and moves over the patient’s virtual body. We conducted a comprehensive agent-guided robotic ultrasound study with all participants, comparing these visualizations against a standard robotic ultrasound procedure. Results showed significant improvements in patient trust, acceptance, and comfort. Based on these findings, we offer insights into designing future mixed reality visualizations and virtual agents to further enhance patient comfort and acceptance in autonomous medical procedures.
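One detail worth unpacking from the abstract: keeping the virtual probe synchronized with the robot-held probe amounts to mapping the probe’s tracked pose from the robot’s coordinate frame into the virtual scene on every rendered frame. The paper’s implementation isn’t reproduced here; this is a minimal Python sketch with a hypothetical calibration transform and illustrative values.

import numpy as np

def homogeneous(rotation, translation):
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical calibration: the robot base expressed in the virtual world frame
# (identity rotation, base offset 1 m along x -- illustrative values only).
T_world_base = homogeneous(np.eye(3), np.array([1.0, 0.0, 0.0]))

def virtual_probe_pose(T_base_probe):
    # Each frame, chain the calibration with the tracked probe pose so the
    # virtual probe touches and moves over the patient's virtual body in sync.
    return T_world_base @ T_base_probe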

Further information and links:

Research Project ForNeRo: fornero.ed.tum.de

