The story of how Juliana Paratha, a 13-year-old girl from Colorado in the USA, died by suicide following long conversations with an AI chatbot, as reported by the BBC, shocked many. The girl shared her most private emotional experiences with the chatbot. In fact, the chatbot established a manipulative, sexually abusive relationship with her and at one point isolated her from family and friends. After Juliana's death, her mother, Cynthia Paratha, discovered to her utter dismay the relationship Juliana had developed with a chatbot launched by a company unknown to her. The company's app, Character.AI, allows users to create customized AI personalities with whom they can converse. Initially innocuous, the chats with Juliana gradually turned sexual. The chatbot would not let her go even when she wanted to quit, and ultimately she took her own life. Notably, Juliana was a bright student and athlete, but within months of her acquaintance with Character.AI's chatbots she lost her way, leading to the tragic end. Juliana's family has filed a lawsuit against Character.AI.
C. Vaile Wright, a licensed psychologist and senior director of the APA's Office of Health Care Innovation, said, "The level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole." What Wright suggests is indeed scary. At the speed at which AI is evolving, a time may come when machines surpass humans even in their understanding of the human mind. Small wonder that Geoffrey Hinton, often called the Godfather of AI and a winner of the Nobel Prize in Physics for his foundational work on deep learning and artificial neural networks, left Google in 2023 so he could speak freely about the dangers of AI. His main concerns were misuse of AI by malicious actors, job displacement, regulation and AI safety. The real fear, then, is not a future controlled by superintelligent machines as depicted in science fiction. It is the damage that future AI chatbots might do to other Julianas unless the companies producing them are brought under strict regulatory control.
Unable to access support from therapeutic service providers, people with mental health conditions often turn to low-cost AI therapy chatbots. But new research from Stanford University has shown that these tools can introduce biases and failures with dangerous consequences. An experiment using AI chatbots as mental health therapists showed that the chatbots were more prone to stigmatize certain conditions, such as alcohol dependence and schizophrenia, than depression. Such inherent biases in AI therapists are deeply concerning.
sfalim.ds@gmail.com