The Rules Needed To Incorporate AI Into Therapy

I’ve written a few times about the growing capabilities of artificial intelligence to support mental health, especially in the detection of various conditions through voice analysis and similar techniques.  Interesting as the domain is, however, it is also one where people expose their deepest vulnerabilities, so getting the ethics right is going to be crucial.

This was the conclusion of a recent study from a team at the Technical University of Munich, who urge all stakeholders to address the matter as quickly as possible.

Of course, that’s not to say the applications don’t have enormous potential, and the authors themselves highlight some of the advantages, not least in widening access for those who might otherwise be limited by time or location.  Similarly, some people may find it easier to have what are often very difficult conversations with a machine than with a human.

Augmenting rather than substituting

They urge us, however, not to view AI-based technologies as a cheap substitute for treatment by human doctors, but rather as something that augments traditional treatment.

“Although embodied AI has arrived in the clinical world, there are still very few recommendations from medical associations on how to deal with this issue. Urgent action is needed, however, if the benefits of these technologies are to be exploited while avoiding disadvantages and ensuring that reasonable checks are in place. Young doctors should also be exposed to this topic while still at medical school,” they explain.

There are a number of projects underway to underpin the ethical development of AI more generally, such as the Ethics Guidelines for Trustworthy AI recently published by the EU.  These tend not to apply to specific fields, however, and so the authors urge that more dedicated attention be given to particular use cases.

“Therapeutic AI applications are medical products for which we need appropriate approval processes and ethical guidelines,” they say. “For example, if the programs can recognize whether patients are having suicidal thoughts, then they must follow clear warning protocols, just like therapists do, in case of serious concerns.”
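To make the idea of a “clear warning protocol” a little more concrete, here is a minimal sketch of how such an escalation rule might be wired into a therapeutic tool. The threshold, function names, and risk model are illustrative assumptions on my part, not something drawn from the study itself; the key point it shows is that the system escalates to a human clinician rather than acting on its own.

```python
# Hypothetical sketch of a warning protocol: a screening model produces a risk
# score, and anything above a clinically governed threshold is escalated to a
# human clinician. All names and values here are assumptions for illustration.

from dataclasses import dataclass

RISK_THRESHOLD = 0.8  # assumed escalation threshold, set by clinical governance


@dataclass
class Assessment:
    session_id: str
    risk_score: float  # e.g. output of a screening model, between 0.0 and 1.0


def notify_clinician(session_id: str, score: float) -> None:
    # Stand-in for paging or messaging the supervising clinician.
    print(f"[ALERT] Session {session_id}: risk score {score:.2f} — clinician review required")


def log_for_review(session_id: str, score: float) -> None:
    # Stand-in for recording the score for later review and audit.
    print(f"[INFO] Session {session_id}: risk score {score:.2f} recorded")


def handle_assessment(assessment: Assessment) -> str:
    """Route a flagged session: never act autonomously, always defer to a human."""
    if assessment.risk_score >= RISK_THRESHOLD:
        notify_clinician(assessment.session_id, assessment.risk_score)
        return "escalated"
    log_for_review(assessment.session_id, assessment.risk_score)
    return "logged"


if __name__ == "__main__":
    handle_assessment(Assessment(session_id="demo-001", risk_score=0.91))
```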

These kinds of concerns perhaps underpin why AI-based tools are a long way from being suitable for independent use, and why they are likely to serve only as decision-support tools for trained therapists and clinicians for some time to come.
