Reasons to Be Wary of the Growing Role of Artificial Intelligence in the Delivery of Mental and Behavioral Healthcare
Since it was introduced in November 2022, the artificial intelligence model known as ChatGPT has garnered substantial interest from the media and the general public.
ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot developed by OpenAI that can write and debug computer programs; compose music, teleplays, fairy tales, and student essays; answer test questions (sometimes better than humans); and even write poetry and song lyrics.
Now it is also venturing into the realm of medical diagnosis. “ChatGPT is not the first innovation in this space,” reported Ruth Hailu, Andrew Beam, and Ateev Mehrotra on STAT News in February. “Over the last decade, various symptom checkers have emerged on websites and in smartphone apps to aid people searching for health information. Symptom checkers serve two main functions: they facilitate self-diagnosis and assist with self-triage. They typically provide the user with a list of potential diagnoses and a recommendation of how quickly they should seek care, like see a doctor right now vs. you can treat this at home.”
A STAT News team tested the capabilities of previous symptom checkers and came away decidedly unimpressed: “Our team once tested the performance of 23 symptom checkers using 45 clinical vignettes across a range of clinical severity. The results raised substantial concerns. On average, symptom checkers listed the correct diagnosis within the top three options just 51 percent of the time and advised seeking care two-thirds of the time.”
But ChatGPT seems to outperform its forerunners. “We gave ChatGPT the same 45 vignettes previously tested with symptom checkers and physicians,” reported the STAT News researchers. “It listed the correct diagnosis within the top three options in 39 of the 45 vignettes (87 percent, beating symptom checkers’ 51 percent) and provided appropriate triage recommendations for 30 vignettes (67 percent). Its performance in diagnosis already appears to be improving with updates. When we tested the same vignettes with an older version of ChatGPT, its accuracy was 82 percent.”
So far, so good. However, one of ChatGPT’s significant issues is its potential to generate inaccurate or false information. Occasionally, the chatbot simply makes things up. “When we asked the application to give a differential diagnosis for postpartum hemorrhage, it appeared to do an expert job and even offered supporting scientific evidence. But when we looked into the sources, none of them actually existed,” reported Rushabh Doshi and Simar Bajaj on STAT News. They identified a similar error “when ChatGPT stated that costochondritis—a common cause of chest pain—can be caused by oral contraceptive pills, but confabulated a fake research paper to support this statement.”
The risk of misinformation is even greater for patients who use ChatGPT to research their own symptoms without review by a medical professional, as many currently do with Google and other search engines. Indeed, ChatGPT managed to generate an alarmingly convincing explanation of how “crushed porcelain added to breast milk can support the infant digestive system.”
OpenAI has acknowledged that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” This behavior, common to large language models, is known as “artificial intelligence hallucination,” and it could lead to serious problems for people who use these services to find and act on medical information or advice.
The use of large language models and generative AI is in its infancy. However, future iterations of ChatGPT could vastly expand its knowledge base and increase its accuracy across domains, including medicine. It is also notable that OpenAI and its partner Microsoft are not the only technology companies racing to develop powerful generative AI tools that promise to change how we interact with computers, how we search for answers on the internet, and potentially how we get medical advice.
Google, for one, has powerful artificial intelligence tools trained specifically to provide medical information. A whitepaper published in December 2022 suggests that Google’s medical generative AI tools could be tuned to answer questions with accuracy approaching that of human clinicians. It seems plausible that, through continued tuning and training, these models could become as accurate as or more accurate than human doctors, and that they could combine data from multiple sources in novel ways to make real-time informational connections that would be impossible for humans. While all of this is currently conjecture, the future may hold tremendous promise for making high-quality medical information more accessible.
AI Is Already Being Used to Assist in the Delivery of Mental Healthcare
While large language models like ChatGPT are all the rage and hold tremendous promise, artificial intelligence has been used in medical applications for years. For example, AI has long been used in medical imaging to help detect abnormal cells. More recently, companies have begun using AI to provide telehealth psychotherapy.
An NPR story from January of this year profiles a person helped by Wysa, a service whose marketing promises that “clinically validated AI as the first step of care and human coaches for those who need more will transform how supported your teams and families feel.” The story points out that while the AI-powered chatbots offered by Wysa and others can be helpful and engaging, they are not the same as interacting with a human clinician, and they create the possibility that users become disenchanted with the idea of psychotherapy.
It seems possible, if not probable, that medical AI will become increasingly advanced and capable. The high cost of healthcare and the shortage of qualified caregivers are cited as primary drivers for adopting AI-assisted medicine. How quickly medical AI advances, how widely mainstream providers adopt it, and which applications emerge remain to be seen. However, the significant investment in this technology suggests that a race is on to rapidly grow the use and sophistication of AI in the medical space.
Behavioral Healthcare Remains a Human-Centered Domain
The desire to use technology to make psychotherapy more accessible and to reduce the workloads of fatigued, overstretched clinicians is, perhaps, understandable. However, mental and behavioral healthcare relies heavily on human-to-human interaction, which may be very difficult for computers to emulate.
The very concept of using technology to deliver therapy flies in the face of a bedrock principle of modern psychology and psychiatry because, ultimately, it attempts to replace the essential relationship between therapist and client with a computer algorithm.
“There is consistent evidence that the quality of the therapeutic alliance is linked to the success of psychotherapeutic treatment across a broad spectrum of types of patients, treatment modalities used, presenting problems, contexts, and measurements,” wrote Dorothy Stubbe, M.D., in 2018 on Psychiatry Online. “Although scholars may differ in how the alliance is conceptualized, most theoretical definitions of the alliance have three themes in common: the collaborative nature of the relationship, the affective bond between patient and therapist, and the patient’s and therapist’s ability to agree on treatment goals and tasks.”
While a chatbot may easily come up with a list of treatment goals, it is hard to envision how patients are supposed to bond with such a digital “therapist,” no matter how convincing its language or tone of voice may be. Addiction, in particular, is a complex, systemic problem, and only a careful individual assessment can determine the best treatment option. Accountability and adherence to treatment are also important aspects of behavioral healthcare that do not lend themselves well to the domain of chatbots.
Today, human caregivers working in a highly controlled milieu can gauge client engagement and progress far more easily and effectively than any algorithm. As the predictable furor over advances in artificial intelligence grows, it may become increasingly important to educate prospective clients, families, and clinical teams about the viable applications of this technology and its potential deficits. In light of the deepening mental health crisis afflicting our nation, the need for human therapeutic alliances is more urgent than ever.
“The interface between clients, clinicians, and peers in individual and group psychotherapy, and the myriad interactions at other touchpoints during and after the treatment episode, create powerful human connections that would seem extremely difficult to facilitate electronically, regardless of how powerful or sophisticated the technology,” says Foundry Steamboat Chief Operating Officer Tom Walker. “Time will tell the applicability of AI in medicine and in our part of the medical field. There may be extraordinarily helpful functions that AI could facilitate, saving countless hours of clerical work, note-taking, and insurance information processing. It may also usher in a new era of information transfer, where very recent peer-reviewed treatment innovations are made immediately available through interconnected systems. There are very smart people thinking of ingenious ways for this technology to make the provision of treatments of all kinds more efficient and effective,” says Walker.

“But I think it’s very important to remember that substance use and co-occurring mental health disorders, trauma, comorbidities, and the many symptoms and side effects of these conditions on individuals and family systems are extraordinarily complex. Every case is unique, and every human being responds to a very discrete set of conditions that help them advance clinically. I cannot foresee a time when AI will help to provide the type of direct care and insights that will help with that part of our process. While it may seem very tempting to want to rely upon these technologies to answer the critical need for treatment, I think we need to be very careful about how we educate the public about the need for in-person behavioral healthcare, regardless of how advanced AI may become in the future.”