Should Virtual Therapists Come With A Health Warning?

The Covid pandemic has had a profound impact on our mental wellbeing, with research from Durham Business School highlighting the scale of the challenge.

“It’s true that many workers encountered new demands on their time, such as needing to learn new tech like Zoom or navigating makeshift work procedures, and new financial demands as well as facing the loss of essential financial resources,” the researchers say. “However, the shift created a series of trade-offs for most people. There were different constraints on the way people allocated their time, energy and money that did not necessarily lead to negative consequences.”

The cost of living crisis has done little to ease these pressures, and research from Cambridge University shows how they can have a knock-on effect on our careers. The authors highlight the significant gap between those who need mental health support and those who receive it, citing data from the United States revealing that fewer than half of those in need get treatment.

What’s more, the researchers found that access to treatment significantly altered the trajectory of people’s careers, eliminating around a third of the earnings penalty typically associated with bipolar disorder.

Virtual treatment

Research last year from the West Virginia School of Medicine showed that virtual therapy sessions conducted via video link during the pandemic were generally as effective as their in-person counterparts. As research in the British Medical Journal reveals, however, such an approach still requires a trained mental health professional, and in England alone the shortage is such that around 1.5 million people are currently waiting for mental health treatment.

It’s no surprise, therefore, that in the UK, two-fifths of patients waiting for mental health support ended up turning to crisis services, with an estimated eight million more unable even to get onto a waiting list. Others in less dire straits might turn to the growing number of AI-based services. Such apps have proven reasonably popular with users, especially when face-to-face alternatives are so hard to come by.

For instance, a few years ago, research from Brigham Young University found that 90% of users reported feeling more motivated, more confident, and better in their mental and emotional health.

“Our findings show that mental and emotional health-focused apps have the ability to positively change behavior,” the authors say. “This is great news for people looking for inexpensive, easily accessible resources to help combat mental and emotional health illness and challenges.”

Accessible support

Various factors underpin this popularity, including the low cost, the ease of access, and the fact that the apps are available around the clock, at times convenient to users. Indeed, research from the University of Southern California found that virtual therapists can be especially useful in areas such as PTSD, where people can feel uneasy sharing uncomfortable things with another human being.

It’s perhaps no surprise, therefore, that the most popular AI-based therapists have racked up millions of users. Indeed, clinical trials are underway to test whether such a service could become part of official mental health care.

While few professionals believe such services can replicate formal mental health care, given how scarce that care is, they may nonetheless provide good-enough support to people in dire need of help.

Mixed messaging

They are not without risks, however, not least due to the variability of service provided. For instance, a study from the University of Sydney raised concerns about the way the apps are marketed. The researchers assessed 61 of the leading mental health apps on the market in the USA, UK, Canada, and Australia, and a couple of core themes emerged in the marketing material for them.

The first was that poor mental health is ubiquitous among the population, and the second was that mental health can be easily managed (with the help of the apps, of course). The researchers believe this presents a number of issues.

“Implying mental health problems are present in everyone promotes the medicalization of normal states,” the researchers say. “The apps we assessed tended to encourage frequent use and promoted personal responsibility for improvement.”

The authors believe this messaging suggests that the normal ups and downs of everyday life require treatment from apps, even when the concerns are relatively minor. Using the apps for such issues is likely to consume a lot of time for little real reward, and it runs the risk of overdiagnosis. The authors also believe that the medical profession needs to take a greater role in how mobile mental health apps are used, especially around more serious issues.

“At the same time, people who have severe mental health issues may be helped by GPs or mental healthcare workers’ discussions around the limitations of app use and the importance of seeking additional forms of supportive health care where needed,” they say.

Conflicting results

Another review, from the University of Warwick, found that while some mental health chatbots were able to deliver positive results, there was also a high risk of bias and of conflicting results. Indeed, the researchers worried that the chatbots were creating an illusion of help rather than providing actual help.

This variation is largely a consequence of the lack of meaningful regulation of the sector, with many apps subject to no governmental oversight at all. Indeed, during the pandemic the FDA actually relaxed the rules surrounding such apps.

It’s perhaps no surprise, therefore, that research from Indiana University found that app providers consistently modified their terminology after the change to give the impression of medical approval.

Suffice it to say, human therapists are far from perfect either, and the quality of care inevitably varies, but there remain considerable concerns that mental health apps struggle to recognize when people are in dire need.

It’s beyond doubt that the mental health sector is in a dire situation, but this paucity of supply doesn’t mean that chatbots are the inevitable answer. Just because it’s so hard to receive face-to-face support doesn’t mean we should automatically accept sub-standard, but readily available, support from technology.

To date, it seems that while technology passes the test in terms of cost and accessibility, it is by no means certain that it passes the test in terms of quality of care, especially for those with the most significant problems.
