
Belgian man takes life after 6 weeks of conversations with chatbot about climate issues

‘Eliza’ developer EleutherAI says that anyone who expresses suicidal thoughts to its chatbot now receives a message directing them to suicide prevention services.


New Delhi: The potential of artificial intelligence (AI) may continue to boggle the human mind, but experts are increasingly flagging its darker side.

A chatbot can even drive a man to suicide, according to the wife of a Belgian man who recently took his own life.

A report by Belgian newspaper La Libre says the father of two regularly had conversations with “Eliza” — a chatbot created by a US start-up using GPT-J technology, the open-source alternative to OpenAI’s GPT-3.

“Without these conversations with the chatbot, my husband would still be here. I am convinced of that,” his wife told La Libre.

The man, who killed himself a few weeks ago, had grown increasingly anxious about the world’s climate crisis and had spent six weeks immersed in conversations with the chatbot in the lead-up to his death.

Around two years ago, he started showing symptoms of anxiety and depression over the issue, his wife said. “Eliza answered all his questions. The chatbot had become his confidante. It was like a drug he retreated into morning and evening, one he couldn’t live without,” she told the Belgian daily.

The chatbot systematically went along with the anxious man’s reasoning and allegedly pushed him deeper into his distress, according to La Libre, which has seen the conversations.

“If you reread their conversations, you see that at one point the relationship seems to be taking a different turn,” the woman stated, adding, “He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence.”

The chatbot also made no attempt to dissuade the man, who was in his late 30s, from acting on his suicidal thoughts, the newspaper reported.

Belgium’s Secretary of State for Digitalisation, Mathieu Michel, told the Belgian newspaper that the incident was “a serious precedent and should be taken as such”. OpenAI has admitted that ChatGPT can produce harmful and biased answers, but it hopes to mitigate the problem by collecting user feedback.

Meanwhile, the chatbot’s Silicon Valley-based maker, EleutherAI, told La Libre that the team was “working to improve the safety of the AI”, adding that people who express suicidal thoughts to the chatbot now receive a message directing them to suicide prevention services.

Earlier this week, EU police force Europol expressed its ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation, and cybercrime.

“Of course, we still have to learn to live with algorithms, but the use of any technology can in no way allow content publishers to avoid their own responsibilities,” Michel said in his statement to La Libre.


Also read: Elon Musk and others urge AI pause, citing ‘risks to society’


 


