Patients Are Reluctant To Trust AI Doctors

The emergence of artificial intelligence (AI) in medical treatment holds promise for enhancing diagnostic precision. However, a recent study led by researchers from the University of Arizona Health Sciences found that approximately 52% of participants preferred human doctors over AI for diagnosis and treatment.

The research team observed that most patients remain unconvinced that diagnoses delivered by AI are as reliable and trustworthy as those made by human doctors. Despite the potential advances AI can offer, patients continue to place greater trust in human physicians.

“While many patients appear resistant to the use of AI, accuracy of information, nudges and a listening patient experience may help increase acceptance,” the researchers explain. “To ensure that the benefits of AI are secured in clinical practice, future research on best methods of physician incorporation and patient decision making is required.”

The human touch

Participants were immersed in simulated patient scenarios and asked to indicate their preference between an AI system and a human doctor for diagnosis and treatment, along with the factors influencing their choice.

The research comprised two distinct phases. In the first, structured interviews were conducted with real patients to examine their reactions to current and future AI technologies. In the second, a blinded, randomized survey was administered to 2,472 participants spanning diverse ethnic, racial, and socioeconomic backgrounds. The survey explored eight variables to gauge participant preferences.

The results revealed a near-even split among participants, with just over 52% preferring human doctors and approximately 47% opting for AI-based diagnosis. Notably, when participants were told that their primary care physicians endorsed AI as a superior and useful adjunct to diagnosis, or were otherwise nudged to consider AI favorably, acceptance of AI increased upon re-questioning. This underscores the significant role human physicians play in guiding patients’ decision-making.

Lack of trust

Interestingly, the severity of the disease, such as leukemia versus sleep apnea, did not significantly impact participants’ trust in AI. However, disparities emerged across racial, ethnic, and social lines. Black participants selected AI less frequently compared to their white counterparts, while Native Americans showed a greater inclination towards AI. Older participants and those who identified as politically conservative or valued religion were less likely to choose AI.

The study’s findings emphasize the importance of accounting for racial, ethnic, and social differences when informing different groups about the value and utility of AI in enhancing diagnostic processes. Sensitivity and tailored approaches are vital to effectively communicating the benefits of AI to diverse populations.

“I really feel this study has the import for national reach. It will guide many future studies and clinical translational decisions even now,” the researchers conclude. “The onus will be on physicians and others in health care to ensure that information that resides in AI systems is accurate, and to continue to maintain and enhance the accuracy of AI systems as they will play an increasing role in the future of health care.”
