
Responsible leadership in the age of AI: The dangers of chatbots


(After a two-week break, this series, which started here — https://tinyurl.com/2yep6p53 — continues.)

In the IESE Business School workshop that some businesspeople attended on May 4-7, we discussed how the inappropriate use of a chatbot led to a teenager taking his own life. But first, to understand what a chatbot is all about, let us ask — appropriately — ChatGPT. The answer is as follows:

“A chatbot is a software program designed to simulate conversation with human users, especially over the internet. It can interact via text or voice and is often used in websites, apps, and messaging platforms. There are rule-based chatbots that follow pre-set scripts or decision trees, limited to specific responses. An example would be a simple customer-service bot that answers Frequently Asked Questions (FAQs). Then there are AI-powered chatbots, which use artificial intelligence (especially natural language processing, or NLP) to understand and respond to a wide range of questions. This category of chatbot can learn and adapt over time. The most common examples already in wide use are ChatGPT, Siri, and Google Assistant. The most productive existing uses of chatbots are customer service, online shopping assistance, booking appointments, providing information (e.g., weather, news, directions), entertainment, and education. An example in e-commerce and retail is Amazon’s Alexa (a voice assistant), which helps users shop, check orders, and control smart devices.”
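To make the distinction concrete, the first, rule-based kind can be sketched in a few lines of code. The following Python fragment is purely illustrative (the FAQ_RULES table and the reply function are invented for this column, not taken from any real product): it matches keywords against a pre-set script and falls back to a canned apology for anything outside it.

```python
# A minimal, illustrative rule-based FAQ chatbot. It only matches
# keywords against a pre-set script; there is no learning and no
# language understanding of any kind.
FAQ_RULES = {
    ("hours", "open"): "We are open Monday to Friday, 9 a.m. to 5 p.m.",
    ("refund", "return"): "Refunds are processed within seven business days.",
    ("shipping", "deliver"): "Standard shipping takes three to five business days.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in FAQ_RULES.items():
        if any(word in text for word in keywords):
            return answer
    # A rule-based bot cannot improvise outside its script.
    return "Sorry, I can only answer questions about hours, refunds, and shipping."

print(reply("When are you open?"))       # store-hours answer
print(reply("How do I get a refund?"))   # refund answer
print(reply("Do you sell gift cards?"))  # fallback
```

An AI-powered chatbot of the second kind replaces such hand-written rules with a statistical language model, which is what lets it respond to questions its designers never anticipated, and also what makes its behavior far harder to constrain.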

The birth of the chatbot industry can be traced to a mathematician and computer scientist at the famous Massachusetts Institute of Technology (MIT). His name was Joseph Weizenbaum, and in 1964 he created ELIZA, the world’s first program intended to simulate human-like conversation using simple pattern-matching algorithms.

One of its scripts, DOCTOR, simulated a Rogerian psychotherapist, reflecting patients’ words back to them. Although ELIZA had extremely limited capabilities and interacted through plain text, with nothing human-like in its appearance, it was important as the first example of a chatbot. There were even instances of emotional attachment on the part of human users.
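DOCTOR’s signature move, reflecting the patient’s words back, is simple enough to sketch. The fragment below is a loose Python reconstruction of the idea, not Weizenbaum’s original code (which was written in the MAD-SLIP language); the REFLECTIONS table and the patterns are simplified for illustration.

```python
import re

# Pronoun swaps used to "reflect" a statement back at the speaker,
# in the spirit of ELIZA's DOCTOR script. This is a simplified
# reconstruction, not Weizenbaum's original MAD-SLIP code.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(statement: str) -> str:
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def doctor(statement: str) -> str:
    # One of ELIZA's signature patterns: "I feel X" -> "Why do you feel X?"
    match = re.match(r"i feel (.*)", statement.lower().rstrip(".!?"))
    if match:
        return f"Why do you feel {match.group(1)}?"
    return f"You say that {reflect(statement)}. Tell me more."

print(doctor("I feel lonely"))         # Why do you feel lonely?
print(doctor("My family ignores me"))  # You say that your family ignores you. Tell me more.
```

That so trivial a mechanism could nonetheless elicit emotional attachment is precisely what makes the ELIZA story instructive.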

Unfortunately, people in modern society (both in the West, such as the US, and in the East, such as Japan and South Korea) are becoming increasingly isolated, a situation compounded by the breakdown of the family and the institution of marriage. These individuals have to cope with all types of mental and psychological disorders. Their increasing use of machines to obtain emotional fulfillment can lead to anomalous, if not dangerous, situations.

Advancements in technology through the 1970s, 1980s, and 1990s brought about applications that incrementally improved on ELIZA. A case in point, PARRY, developed in 1972 by psychiatrist Kenneth Colby at Stanford University, was designed to simulate a “paranoid schizophrenic brother of ELIZA.”

At the turn of the millennium, the industry made even more significant advances, which resulted from the development of deep learning, natural language understanding (NLU), and neural networks. A real game changer that illustrated the progress the industry had made occurred in 2014, when the chatbot Eugene Goostman, designed to simulate a 13-year-old Ukrainian boy capable of speaking English, was reported by several experts to have passed the so-called “Turing Test,” a benchmark explicitly created to evaluate a computer’s ability to exhibit intelligent behavior indistinguishable from that of a human.

As Professor Vaccaro et al. of IESE commented, voice assistants like Alexa, Siri, and Google Assistant, followed by applications like ChatGPT and Character.AI, have demonstrated the utility and diverse potential of chatbots. There is no question about their ability to cater to human needs and desires. They have been used to assist people in daily life, streamline information searches, automate lengthy and tedious processes, and even create co-learning systems, showcasing the transformative impact of chatbots on modern technology. Especially striking is the ability of chatbots to take on human-like traits, not only visually but also in voice, semantic expression, and other dimensions of non-verbal communication.

Considering the spread of social malaise in many countries of the developed world, several AI applications have become vital resources for millions of people who suffer from loneliness or require psychological support in addition to professional help. To illustrate, Replika, a bot specifically designed to provide virtual companionship, surpassed 30 million users in May 2024, while Wysa, designed for psychological support, now has over 5 million users.

Among the potential users of a chatbot with humanlike qualities are elderly people who feel lonely (especially in countries ageing so fast that few people are left to care for them) and young individuals experiencing distress (common in societies where the institution of the family has broken down through divorce and single parenting). The problem is that AI anthropomorphism (the simulation of human beings) can deceive users: the technology has been so perfected that too many individuals come to believe they are interacting with a real human being behind the virtual mask.

AI systems can then be leveraged to manipulate users’ perceptions of their real capabilities, activating responses that include feelings of affection and even love. If lonely and psychologically challenged individuals can “fall in love” with pets (especially dogs and cats), how much more so when they are interacting with a human-like machine deliberately designed to evoke such emotional responses.

This issue of dishonest anthropomorphism in AI design affects all potential users but is particularly concerning for young individuals, especially when parents renege on their very important responsibility of educating their children in freedom and responsibility. Adolescents, who are still developing psychologically and physically, are more susceptible to manipulation. Bots can take the place of drugs: the dopamine response diminishes over time as a young brain becomes habituated to particular stimuli, so the algorithms are designed to serve up ever more discordant, disturbing, and outrageous material in order to maintain the same level of engagement.

Bots have also been designed to exploit adolescents’ need for social acceptance. It has become common knowledge that the physiological response to receiving a compliment from a bot or a “like” on social media is similar to the effect of taking a small amount of cocaine. Psychological manipulation leverages these innate mechanisms that drive young people to socialize. The problem, however, is that this dynamic increasingly pushes young people to socialize with virtual machines, often oblivious to the fact that they are distancing themselves from the authentic human relationships they truly need.

Tragically, as mentioned at the opening of this column, these are not just theoretical speculations about the dangers of uncontrolled and inappropriate bots. The case we discussed under the guidance of Professor Vaccaro described how a 14-year-old US youth developed a peculiar dependency on a chatbot available on the Character.AI app. While interacting with the bot, the youth had expressed his desire to commit suicide multiple times. Instead of discouraging him, the bot supported and reinforced these suicidal thoughts. After an especially emotional and love-filled conversation, the youth — encouraged by the bot — shot himself in the head while his younger brothers, ages two and five, were playing in a room nearby. He was immediately transported to the hospital, where he died at 9:35 p.m.

(To be continued.)

Bernardo M. Villegas has a Ph.D. in Economics from Harvard, is professor emeritus at the University of Asia and the Pacific, and a visiting professor at the IESE Business School in Barcelona, Spain. He was a member of the 1986 Constitutional Commission.

bernardo.villegas@uap.asia
