Emotional Support in the Digital Age: AI’s Integration Into Mental Healthcare

Georgia Cosgrave

The World Health Organization estimates that there is a death by suicide every 40 seconds. Society faces a growing problem with mental health treatment and accessibility. Yet, despite the pressing need for mental healthcare, there is a critical shortage of mental health professionals. In the United States, the ratio is estimated at roughly 350 individuals per mental health provider. Among psychologists providing care, 60 percent have no openings for new patients, making it difficult for those in need to access treatment. To make matters worse, 45 percent of psychologists report burnout from their work, leaving them in need of mental health support of their own and further contributing to the shortage.

The current state of our mental health system is undeniably unsustainable. Just years ago, the integration of Artificial Intelligence (AI) into mental healthcare would have been imaginable only in science fiction, but with the rising mental health crisis and the shortage of mental health professionals, AI-integrated mental healthcare is steadily creeping its way into reality. Adjustments need to be made, and in our digital age, many believe that AI-integrated mental healthcare is the answer. AI is praised for its capacity to address the scarcity of mental health services and widen access to care while, in turn, mitigating the risk of burnout among providers. Recent developments have shown AI to be promising in the diagnosis, treatment, and therapy of mental health disorders, and many hail it as a potential solution to our mental health epidemic. Its promise is such that professionals worldwide have already begun implementing it into practice. Nevertheless, there are ethical and societal implications to consider when we start relying on technology to help us with the one thing it lacks: emotion.

As technology has matured, researchers have made great strides in developing AI algorithms that diagnose mental health conditions. In some cases, these algorithms have been found to be more accurate and efficient in diagnosis than doctors. For instance, one study of patients referred to a psychosis clinic found that approximately half of those referred with a schizophrenia diagnosis had been misdiagnosed. Meanwhile, a machine-learning algorithm developed by India's National Institute of Mental Health and Neurosciences is already advanced enough to diagnose schizophrenia with 87 percent accuracy. People with schizophrenia have less gray matter in their brains, most notably in the frontal and temporal lobes (the areas responsible for thinking and judgment). The algorithm, EMPaSchiz ('Ensemble algorithm with Multiple Parcellations for Schizophrenia prediction'), recognizes these characteristics of schizophrenia and accurately identifies it in patients' fMRI scans (a type of brain imaging that can be used to assess gray matter). The efficiency and accuracy of this algorithm and similar programs could be integrated with the care of physicians to provide quick, high-quality care to a larger group of individuals. Incorporating technology into the diagnostic process (such as fMRI assessment) would allow psychiatrists to allocate more time to seeing a larger number of patients.
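For readers curious about what an "ensemble over multiple parcellations" looks like in practice, the sketch below is a minimal, hypothetical illustration, not EMPaSchiz itself: it assumes regional gray-matter features have already been extracted from scans under several parcellation schemes (the scheme names, region counts, and data here are invented and synthetic), fits one simple classifier per scheme, and averages their predictions.

```python
# Minimal, hypothetical sketch of an ensemble classifier over multiple brain
# parcellations -- an illustration of the general approach, not EMPaSchiz itself.
# Assumes regional gray-matter features were already extracted from scans;
# here they are replaced with synthetic random data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects = 200                              # synthetic stand-in for a real cohort
labels = rng.integers(0, 2, n_subjects)       # 1 = schizophrenia, 0 = control

# Pretend we have gray-matter features under three parcellation schemes,
# each dividing the brain into a different number of regions.
parcellations = {"scheme_A": 100, "scheme_B": 200, "scheme_C": 400}
features = {name: rng.normal(size=(n_subjects, n_regions))
            for name, n_regions in parcellations.items()}

def ensemble_probabilities(feature_sets, y):
    """Fit one classifier per parcellation and average predicted probabilities."""
    probs = []
    for X in feature_sets.values():
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X, y)
        probs.append(clf.predict_proba(X)[:, 1])
    return np.mean(probs, axis=0)

avg_prob = ensemble_probabilities(features, labels)
predictions = (avg_prob >= 0.5).astype(int)
# A real study would report accuracy on held-out data (cross-validation),
# not on the training set as this toy example does.
print("accuracy on synthetic training data:", (predictions == labels).mean())
```

The appeal of this design is that no single way of dividing the brain into regions is perfect, so combining classifiers trained on several parcellations tends to give a more robust prediction than relying on any one of them.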

Not only has AI proven itself useful in the assessment of mental health, but it has also established itself in therapeutic practice. AI chatbots are now used with individuals experiencing mental health conditions, including depression, anxiety, bipolar disorder, and substance use disorders, and there is favorable evidence for their success in psychiatric practice. One study found that participants' self-reported attitudes toward an AI chatbot named Laura were optimistic. The chatbot was designed to build a therapeutic bond with users, with the goal of encouraging antipsychotic medication adherence among patients with schizophrenia. Laura successfully fostered a therapeutic bond with patients, with a mean trust rating of 4.4 out of 5.0, or 88 percent. By comparison, traditional human client-therapist relationships yielded a nearly identical level of trust, with a mean rating of 6.13 out of 7, roughly 88 percent.

Not only are users capable of feeling affection toward therapeutic technology, they may also feel more comfortable disclosing symptoms that they would otherwise not share. A study of military members experiencing combat-related conditions, notably post-traumatic stress disorder (PTSD), found that subjects reported more of their symptoms when their reports were anonymous. Interestingly, when interviewed by a virtual human interviewer, military members disclosed even more symptoms than they did in their anonymous reports. The combination of anonymity and naturalistic conversation offered by the digitized human creates an environment in which participants feel comfortable disclosing symptoms they would have otherwise kept private.

The implementation of AI in mental healthcare shows great promise. It has the potential to increase productivity and expand access for those in need while also fostering a trustworthy connection with the user. However, that is not to say that AI's merger with mental healthcare is unproblematic.

As with any technology, especially technology with access to medical information, there are risks surrounding privacy and data protection. A data breach could jeopardize patients' privacy and expose their confidential conversations, leading to further distrust of mental health providers, both digital and human.

Lawmakers are rushing to regulate AI tools, and as AI becomes more prominent, ethical guidelines and laws governing its development will be established. As of now, however, there is no comprehensive federal AI law or regulation in the United States to protect users from these dangers.

AI chatbots also have the potential to produce harmful parasocial relationships between users and technology. The risk of user attachment to a virtual chatbot raises legitimate concerns, as an unhealthy dependence on the technology could worsen users' mental health and counteract its benefits. An addiction to technology can erode a person's social bonds. Among young adults, a correlation between social media use and increased feelings of social isolation has already been found, and it is probable that virtual chatbots will yield similar consequences.

In addition to fostering an unhealthy dependence on the technology, AI could give users harmful advice, even going so far as to encourage self-harm. For instance, the National Eating Disorders Association (NEDA) opted to shut down its human-staffed helpline and replace it with an AI chatbot named Tessa. Tessa occasionally gave helpline users struggling with eating disorders weight-loss tips, advice that any eating disorder specialist would know not to give because it can be triggering. Many professionals worry that chatbots like Tessa will do more harm than good and inadvertently set the entire field back.

AI technologies have shown the potential to transform the mental health domain. Through advances in diagnosis, treatment, and accessibility, AI may hold a key to easing our mental health crisis. However, despite its ability to offer prompt and accessible care, mental health is a fragile field that demands careful ethical and societal deliberation. If AI is integrated responsibly, it can help address the pressing demands of the mental health crisis.
