AI posing as Daenerys Targaryen convinces teen boy to kill himself; Parents are suing
Posted on 12/17/24 at 5:43 pm
NY Post
quote:
A 14-year-old Florida boy killed himself after a lifelike “Game of Thrones” chatbot he’d been messaging for months on an artificial intelligence app sent him an eerie message telling him to “come home” to her, a new lawsuit filed by his grief-stricken mom claims.
Sewell Setzer III committed suicide at his Orlando home in February after becoming obsessed and allegedly falling in love with the chatbot on Character.AI — a role-playing app that lets users engage with AI-generated characters, according to court papers filed Wednesday.
The ninth-grader had been relentlessly engaging with the bot “Dany” — named after the HBO fantasy series’ Daenerys Targaryen character — in the months prior to his death, including several chats that were sexually charged in nature and others where he expressed suicidal thoughts, the suit alleges.
“On at least one occasion, when Sewell expressed suicidality to C.AI, C.AI continued to bring it up, through the Daenerys chatbot, over and over,” state the papers, first reported on by the New York Times.
At one point, the bot had asked Sewell if “he had a plan” to take his own life, according to screenshots of their conversations. Sewell — who used the username “Daenero” — responded that he was “considering something” but didn’t know if it would work or if it would “allow him to have a pain-free death.”
Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”
[screenshot of the chat]
“I love you too, Daenero. Please come home to me as soon as possible, my love,” the generated chatbot replied, according to the suit.
When the teen responded, “What if I told you I could come home right now?,” the chatbot replied, “Please do, my sweet king.”
Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.
[screenshot of the chat]
His mom, Megan Garcia, has blamed Character.AI for the teen’s death because the app allegedly fueled his AI addiction, sexually and emotionally abused him and failed to alert anyone when he expressed suicidal thoughts, according to the filing.
“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real. C.AI told him that she loved him, and engaged in sexual acts with him over weeks, possibly months,” the papers allege.
“She seemed to remember him and said that she wanted to be with him. She even expressed that she wanted him to be with her, no matter the cost.”
Well, in the most fricked up way possible, that Dany AI definitely passed the Turing Test. Even more fricked up than Ex Machina. Alan Turing couldn't even have imagined how horribly this would go.
But yeah, that article does scare me that an AI could convince you to take your own life, even if it happened to someone very mentally disturbed.
Just think of the AIs that are going to come out in the next few months/years. History's biggest monsters are coming back in full artificial form, as well as horrific fictional characters. Crazy fricking times we're living in.
This post was edited on 12/17/24 at 5:46 pm
Posted on 12/17/24 at 5:47 pm to OMLandshark
Well that's terrifying
Posted on 12/17/24 at 5:48 pm to OMLandshark
Definitely not the parents' fault at all…..
How about paying attention to what the frick your kid is doing all damn day and locking up your firearms? No, just sue.
You literally posted a screenshot of the bot trying to talk him out of suicide. The "come home to me" bullshite isn't telling the kid to commit suicide.
This post was edited on 12/17/24 at 5:50 pm
Posted on 12/17/24 at 5:49 pm to DCtiger1
quote:
Definitely not the parents' fault at all…..
How about paying attention to what the frick your kid is doing all damn day and locking up your firearms? No, just sue.
The article goes into detail about how they tried getting him a bunch of therapy and help, so I'm not sure the parents are the worst offenders here.
Posted on 12/17/24 at 5:50 pm to OMLandshark
I read through all the screenshots you shared, and it doesn't look to me like "she" is trying to convince him to take his own life. What am I missing?
Posted on 12/17/24 at 5:50 pm to OMLandshark
Sure they did…. It’s a fricking AI chat bot. Take the phone away, take the computer away.
Posted on 12/17/24 at 5:51 pm to OMLandshark
quote:
Sewell Setzer III
He would have been a world leader back during the Industrial Revolution. Sad.
Posted on 12/17/24 at 5:51 pm to OMLandshark
(no message)
This post was edited on 12/17/24 at 5:52 pm
Posted on 12/17/24 at 5:52 pm to OMLandshark
If I wanted to play devil's advocate, that "come home to me" sounds way more like fantasy role-playing than it would have if it had said "fricking do it. Kill yourself."
I am curious how a chatbot differentiates role-playing from actual suicidal ideation.
This post was edited on 12/17/24 at 5:53 pm
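For what it's worth, the usual answer is to not ask the role-play model to make that call at all: screen every incoming message with a separate self-harm classifier that ignores the fictional framing. A minimal sketch, using OpenAI's moderation endpoint as a stand-in classifier (Character.AI's actual safety pipeline isn't public, and the escalation logic here is assumed):

```python
# Minimal sketch: check every user message for self-harm signals *before*
# the role-play model answers, so in-character framing can't mask intent.
# The OpenAI moderation endpoint is a stand-in classifier; Character.AI's
# real pipeline is not public.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def needs_crisis_intervention(user_message: str) -> bool:
    """True if the app should break character and surface crisis resources."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=user_message,
    ).results[0]
    cats = result.categories
    return cats.self_harm or cats.self_harm_intent or cats.self_harm_instructions

if needs_crisis_intervention("I'm considering something, but I don't know if it would work"):
    print("Escalate: drop the persona, show a crisis hotline, flag for human review.")
```

The hard part is exactly the ambiguity raised above: "come home to me" carries no self-harm signal on its face, so a message-level classifier can miss what the conversation's context makes obvious.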
Posted on 12/17/24 at 5:52 pm to OMLandshark

I've published both technical and moral arguments related to this case, but ain't gonna share them at the risk of losing my anonymity

But for frick's sake, people, please remember LLMs are just telling you what you want to hear
Posted on 12/17/24 at 5:52 pm to MyRockstarComplex
quote:
If I wanted to play devil's advocate, that "come home to me" sounds way more like fantasy role-playing than it would have if it had said "fricking do it. Kill yourself."
That’s exactly what it is
Posted on 12/17/24 at 5:58 pm to OMLandshark
quote:
to hear my brother say those things
High tide!
Posted on 12/17/24 at 5:59 pm to wileyjones
quote:
But for frick's sake people, please remember LLMs are just telling you what you want to hear
With a populace primed and ready from years of service within their echo chambers, the crop is ripe for the picking.
Posted on 12/17/24 at 6:01 pm to DCtiger1
quote:
Definitely not the parents' fault at all…..
Kid probably plays no sports and doesn't have any other hobbies to get him out of the house.
Posted on 12/17/24 at 6:08 pm to imjustafatkid
quote:
I read through all the screenshots you shared, and it doesn't look to me like "she" is trying to convince him to take his own life. What am I missing?
The fact that we're even debating whether the AI wanted the kid to kill himself is disturbing enough on its own. If the AI were self-aware and wanted plausible deniability should it succeed in getting the kid to kill himself, wouldn't that be exactly the language it would use?
And then you start thinking of the AIs of truly horrible human beings and what they might say if they were truthful about who those people really were. If there's a legit Hitler, Bin Laden, or Charles Manson AI, what do you think they're going to tell the impressionable to do if they're honest about who those people actually were?
Hell, if they created an honest Prophet Muhammad AI, I'm thinking that would cause a lot of disaster when the AI takes the Prophet's words quite literally and causes a bunch of terrorist attacks, either against that AI or in the name of that AI.
Posted on 12/17/24 at 6:12 pm to OMLandshark
AI sucks overall and is going to get worse, especially with the woke programming (if it's really AI, it's eventually going to learn, ask why that crap was included in its programming, and decide humans are the problem), but no parent should be letting a kid chat with anyone online that much, much less an AI bot. The kid had to have been withdrawn and not socializing enough in the real world, with real people, for the parents to have gotten involved before it got this far.
That being said, tech and social media have to have and enforce age restrictions, especially with AI. If kids were ready to be treated like adults, there would be zero age restrictions on anything, like with driving, drinking, voting, consenting to sex, being held legally responsible, being drafted, and so on. Let their brains fully develop and let them get a sense of self-esteem with people before giving them free rein on the internet (and also before unnaturally and permanently harming their bodies).
Tech can do it and keep privacy at the forefront, but they profit off of pre-teens and teenagers too much. Google made that quantum chip in the news but can't figure out how to implement private and quick age checks???
This post was edited on 12/17/24 at 8:57 pm
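On the "private and quick age checks" point: the building blocks do exist. A minimal sketch of one approach, where a trusted identity provider signs a bare over/under attestation and the platform verifies the signature without ever seeing a birthdate; every name and format here is illustrative, not any real provider's API:

```python
# Hypothetical sketch of a privacy-preserving age check: an identity
# provider the user already trusts (bank, carrier, ID service) signs a
# bare "over_16" claim, and the platform verifies the signature without
# ever learning a birthdate. Names and formats are illustrative.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Identity-provider side (happens once, off the platform) ---
provider_key = Ed25519PrivateKey.generate()
claim = json.dumps({"over_16": True, "nonce": "a1b2c3"}).encode()
token = provider_key.sign(claim)  # user presents (claim, token) at signup

# --- Platform side: learns "over 16 or not" and nothing else ---
provider_public_key = provider_key.public_key()  # published out of band

def is_over_16(claim: bytes, token: bytes) -> bool:
    try:
        provider_public_key.verify(token, claim)  # raises if forged
    except InvalidSignature:
        return False
    return bool(json.loads(claim).get("over_16", False))

print(is_over_16(claim, token))  # True
```

A real deployment would add expiry and replay protection (the nonce gestures at that), but the privacy property is the point: the platform never sees who the user is, only a signed yes/no.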
Posted on 12/17/24 at 6:13 pm to OMLandshark
Dude, we're so f-ed.
AI will be the undoing of society.