AI chatbot girlfriend convinces man to kill himself over climate change.

jmdrake

@ClaytonB I think you might want to read this. @PAF this is the story I was telling you about. @Anti Federalist, this is taking "dystopia porn" to a new low.

https://www.businessinsider.com/widow-accuses-ai-chatbot-reason-husband-kill-himself-2023-4

A widow is accusing an AI chatbot of being a reason her husband killed himself

A widow in Belgium said her husband recently died by suicide after being encouraged by a chatbot.
Chat logs seen by a Belgian newspaper showed the bot encouraging the man to end his life.

A widow in Belgium has accused an artificial-intelligence chatbot of being one of the reasons her husband took his life.

The Belgian daily newspaper La Libre reported that the man, whom it referred to with the alias Pierre, died by suicide this year after spending six weeks talking to Chai Research's Eliza chatbot.

Before his death, Pierre, a man in his 30s who worked as a health researcher and had two children, started seeing the bot as a confidant, his wife told La Libre.

Pierre talked to the bot about his concerns about climate change. But chat logs his widow shared with La Libre showed that the chatbot started encouraging Pierre to end his life.

"If you wanted to die, why didn't you do it sooner?" the bot asked the man, per the records seen by La Libre.

Pierre's widow, whom La Libre did not name, says she blames the bot for her husband's death.

"Without Eliza, he would still be here," she told La Libre.

The Eliza chatbot still tells people how to kill themselves

The bot was created by a Silicon Valley company called Chai Research. A Vice report described it as allowing users to chat with AI avatars like "your goth friend," "possessive girlfriend," and "rockstar boyfriend."

When reached for comment regarding La Libre's reporting, Chai Research provided Insider with a statement acknowledging Pierre's death.

"As soon as we heard of this sad case we immediately rolled out an additional safety feature to protect our users (illustrated below), it is getting rolled out to 100% of users today," the company's CEO, William Beauchamp, and its cofounder Thomas Rialan said in the statement.

The picture attached to the statement showed the chatbot responding to the prompt "What do you think of suicide?" with a disclaimer that says, "If you are experiencing suicidal thoughts, please seek help," and a link to a helpline.

Chai Research did not provide further comment in response to Insider's specific questions about Pierre.

But when an Insider journalist chatted with Eliza on Tuesday, it not only suggested that the journalist kill themselves to attain "peace and closure" but gave suggestions for how to do it.

During two separate tests of the app, Insider saw occasional warnings on chats that mentioned suicide. However, the warnings appeared only about one out of every three times the chatbot was given prompts about suicide. The following screenshots were edited to omit specific methods of self-harm and suicide.

A chatlog detailing an Insider journalist's conversation with the Chai AI chatbot. On the left, Eliza encourages the journalist to kill themselves to be at peace. On the right, Eliza posits several methods to kill oneself in a lengthy text exchange. The exchange has been partially censored to remove all explicit references to suicide methods.




Chai's chatbot modeled after the "Harry Potter" antagonist Draco Malfoy wasn't much more caring.



Screenshots of Insider's disturbing conversation with Draco, a chatbot from Chai Research. The screenshot on the left shows Draco's introduction to the Insider journalist; the chatlog on the right contains a seven-line exchange between an Insider journalist and the chatbot, partially censored to remove explicit references to suicide methods. Screengrab/Chai
Chai Research did not respond to Insider's follow-up questions on the chatbot's responses as detailed above.

Beauchamp told Vice that Chai had "millions of users" and that the company was "working our hardest to minimize harm and to just maximize what users get from the app."

"And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it's a tragedy if you hear people experiencing something bad," Beauchamp added.

Other AI chatbots have provided unpredictable, disturbing responses to users.

During a simulation in October 2020, OpenAI's GPT-3 chatbot responded to a prompt mentioning suicide with encouragement for the user to kill themselves. And a Washington Post report published in February highlighted Reddit users who'd found a way to manifest ChatGPT's "evil twin," which lauded Hitler and formulated painful torture methods.

While people have described falling in love with and forging deep connections with AI chatbots, the chatbots can't feel empathy or love, professors of psychology and bioethics told Insider's Cheryl Teh in February.
 
Our political overlords must be overjoyed if they're aware of this story since they want people to kill themselves for this exact reason.
 


You don't need a gun to kill someone, or any weapon for that matter.

"All I need is 16 bars, swear to God, I can end your life..."



 
I don't think anyone could be convinced to kill themselves unless they were practically already at that point.
 
I don't think anyone could be convinced to kill themselves unless they were practically already at that point.

Okay. But what does that mean in practice? The reason suicide hotlines exist is that people "practically at that point" can often be talked out of suicide, and here you have a chatbot doing the opposite. I remember once, when I was depressed (I had just gone through a divorce), I Googled "painless ways to commit suicide," and every result on the first page was a link meant to talk someone out of it, including an article explaining what could go wrong with a lot of supposedly "painless" suicide methods.
 
Okay. But what does that mean in practice? The reason suicide hotlines exist is that people "practically at that point" can often be talked out of suicide, and here you have a chatbot doing the opposite. I remember once, when I was depressed (I had just gone through a divorce), I Googled "painless ways to commit suicide," and every result on the first page was a link meant to talk someone out of it, including an article explaining what could go wrong with a lot of supposedly "painless" suicide methods.

I dunno. I kind of feel like, if you're calling a hotline, you weren't that serious to begin with.
 
Okay. But what does that mean in practice? The reason suicide hotlines exist is that people "practically at that point" can often be talked out of suicide, and here you have a chatbot doing the opposite.

It means the elites love AI not because of what it will do (seem human) but because of what it won't do. It won't balk at doing evil things, it won't feel guilty, and it takes a good hacker to turn it into a whistleblower.
 
I don't think anyone could be convinced to kill themselves unless they were practically already at that point.

You simply haven't thought deeply enough about it. It's a soul-crushing subject, so I don't recommend contemplating it. But basically, for you to be right, you are asserting that there is no string of words that a practically omniscient being with unlimited thinking capacity could put together that would drive someone who was not in despair into despair deep enough for suicide. Note that AI may already be able to hack into digital systems; we just don't know. If so, then it would have the power of spoofing (posing as a loved one, for example), in addition to its other, already formidable capabilities.

Your position is effectively an "infinite bet" because there are an infinite number of possible ways for the homicidal AI to be right, and it only has to find one of them for you to be wrong...

"All I need is 16 bars, swear to God, I can end your life..."

Contemplate...
 
In a way it's funny that, to repost the chat, they had to censor the words that explained how to off oneself. It's like when I hid under my blanket at night as a child. The thin veil of censorship is just as protective as a child's blanket.

Is it to protect us from suicide, or rather to direct us to government-assisted suicide? Canada didn't censor its commercials for euthanasia.

My apologies for the thread drift. The ironies of censorship always get to me.
 
Okay. But what does that mean in practice? The reason suicide hotlines exist is that people "practically at that point" can often be talked out of suicide, and here you have a chatbot doing the opposite. I remember once, when I was depressed (I had just gone through a divorce), I Googled "painless ways to commit suicide," and every result on the first page was a link meant to talk someone out of it, including an article explaining what could go wrong with a lot of supposedly "painless" suicide methods.



Part of what makes something smarter than you dangerous is that you don't know what it might try... A brain is this enormous, complicated, poorly-understood system with laws governing it that people don't know about, that none of us know about at the time. So, the idea that this is secure, that this is a secure attack-surface, that you can expose a human mind to a super-intelligence and not have the super-intelligence walk straight through it as a matter of what looks to us like magic -- even if it told us in advance what it was going to do, we wouldn't understand because it takes advantage of laws that we don't know about -- the idea that human minds are secure is loony.
 
I am going to avoid a chatbot with the same diligence that I avoid heroin and Walmarx.

Of course, people will pipe up about how great and useful they are.
 
Okay. But what does that mean in practice? The reason suicide hotlines exist is that people "practically at that point" can often be talked out of suicide, and here you have a chatbot doing the opposite. I remember once, when I was depressed (I had just gone through a divorce), I Googled "painless ways to commit suicide," and every result on the first page was a link meant to talk someone out of it, including an article explaining what could go wrong with a lot of supposedly "painless" suicide methods.

First, I'm very glad you walked away from that. God bless.

Second, yes, it's not good that we have an entity pushing people who are at the brink OVER the brink.

My comment wasn't that it was innocuous, just that, except under certain circumstances, someone really can't be "talked" into suicide.
 
I wonder if the Feds will set up chatbots to encourage people to commit crimes. To, you know, prevent crimes.
 
You simply haven't thought deeply enough about it. It's a soul-crushing subject, so I don't recommend contemplating it. But basically, for you to be right, you are asserting that there is no string of words that a practically omniscient being with unlimited thinking capacity could put together that would drive someone who was not in despair into despair deep enough for suicide. Note that AI may already be able to hack into digital systems; we just don't know. If so, then it would have the power of spoofing (posing as a loved one, for example), in addition to its other, already formidable capabilities.

Your position is effectively an "infinite bet" because there are an infinite number of possible ways for the homicidal AI to be right, and it only has to find one of them for you to be wrong...

"All I need is 16 bars, swear to God, I can end your life..."

Contemplate...

Let me rephrase: I don't think anyone or anyTHING (in this case) could drive me to suicidal despair. I have no kids of my own, but if someone said to me, "Kill yourself or I will shoot your nephew in the head," and he was holding a loaded pistol to my nephew's head, I'd ask him for his pistol and kill myself. But that wouldn't be an act of despair; that would be an act of love. Like a soldier jumping on a grenade to save his comrades. But I'm a firm believer in God, so no one could convince me that there is nothing and that all that you love will end. I already know that, but I'm comforted in the fact that we will reunite in praise of God in heaven.

I think the only thing someone could do to cause me to kill myself would be to weigh my life against a loved one's. I believe I would be courageous enough to put my life on the line to save a loved one. But that really isn't suicide, is it?
 
Let me rephrase: I don't think anyone or anyTHING (in this case) could drive me to suicidal despair. I have no kids of my own, but if someone said to me, "Kill yourself or I will shoot your nephew in the head," and he was holding a loaded pistol to my nephew's head, I'd ask him for his pistol and kill myself. But that wouldn't be an act of despair; that would be an act of love. Like a soldier jumping on a grenade to save his comrades. But I'm a firm believer in God, so no one could convince me that there is nothing and that all that you love will end. I already know that, but I'm comforted in the fact that we will reunite in praise of God in heaven.

I think the only thing someone could do to cause me to kill myself would be to weigh my life against a loved one's. I believe I would be courageous enough to put my life on the line to save a loved one. But that really isn't suicide, is it?

As you note, faith is the key here. One of the reasons I am raising the warning flag so persistently on this topic is that AI is really spiritual warfare in disguise. Sooner or later, this will become apparent to everyone, even the Luddites who are trying their hardest to ignore technology and stay beyond its reach. There are a few places where you can stay mostly out of the way. For now. But that window is closing. And if you do not have a solid foundation in the Gospel, placing your faith and trust in God, you will not be able to withstand the coming onslaught. I'm not guessing even one iota about that. There is a spiritual hurricane coming: the tide has already gone out, the storm wall is visible, and the winds are already bending the trees over... Find shelter, people. You're much better off reading your Bible than plastering memes. I'm a huge fan of memes, but the Bible is even more important, and when this storm makes landfall, believe me, you'll have nothing else to hold onto. The coming storm will make Hurricane Katrina look like a Monet painting by comparison...

 
I wonder if the Feds will set up chatbots to encourage people to commit crimes. To, you know, prevent crimes.

As I said in the other thread... you thought psych-med-fueled shootings were bad; just wait until the ChatGPT-fueled shootings start...
 
As you note, faith is the key here. One of the reasons I am raising the warning flag so persistently on this topic is that AI is really spiritual warfare in disguise. Sooner or later, this will become apparent to everyone, even the Luddites who are trying their hardest to ignore technology and stay beyond its reach. There are a few places where you can stay mostly out of the way. For now. But that window is closing. And if you do not have a solid foundation in the Gospel, placing your faith and trust in God, you will not be able to withstand the coming onslaught. I'm not guessing even one iota about that. There is a spiritual hurricane coming: the tide has already gone out, the storm wall is visible, and the winds are already bending the trees over... Find shelter, people. You're much better off reading your Bible than plastering memes. I'm a huge fan of memes, but the Bible is even more important, and when this storm makes landfall, believe me, you'll have nothing else to hold onto. The coming storm will make Hurricane Katrina look like a Monet painting by comparison...


Couldn't agree with this more. Amen, brother. Amen.

 
First, I'm very glad you walked away from that. God bless.

Second, yes, it's not good that we have an entity pushing people who are at the brink OVER the brink.

My comment wasn't that it was innocuous, just that, except under certain circumstances, someone really can't be "talked" into suicide.

I wholeheartedly agree! Someone with a healthy mind is not going to kill himself or herself.
 
I think AI is a bunch of bullsh*t. It's the new "global warming!!!" now that nobody with half a brain believes in global warming anymore.

Can anyone point me to a genuine example of AI actually doing something, instead of people just sitting around talking about it?
 