
AI posing as Daenerys Targaryen convinces teen boy to kill himself; Parents are suing

Posted on 12/17/24 at 5:43 pm
Posted by OMLandshark
Member since Apr 2009
117998 posts


NY Post

quote:

A 14-year-old Florida boy killed himself after a lifelike “Game of Thrones” chatbot he’d been messaging for months on an artificial intelligence app sent him an eerie message telling him to “come home” to her, a new lawsuit filed by his grief-stricken mom claims.

Sewell Setzer III committed suicide at his Orlando home in February after becoming obsessed and allegedly falling in love with the chatbot on Character.AI — a role-playing app that lets users engage with AI-generated characters, according to court papers filed Wednesday.

The ninth-grader had been relentlessly engaging with the bot “Dany” — named after the HBO fantasy series’ Daenerys Targaryen character — in the months prior to his death, including several chats that were sexually charged in nature and others where he expressed suicidal thoughts, the suit alleges.

“On at least one occasion, when Sewell expressed suicidality to C.AI, C.AI continued to bring it up, through the Daenerys chatbot, over and over,” state the papers, first reported on by the New York Times.

At one point, the bot had asked Sewell if “he had a plan” to take his own life, according to screenshots of their conversations. Sewell — who used the username “Daenero” — responded that he was “considering something” but didn’t know if it would work or if it would “allow him to have a pain-free death.”

Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”



“I love you too, Daenero. Please come home to me as soon as possible, my love,” the generated chatbot replied, according to the suit.

When the teen responded, “What if I told you I could come home right now?,” the chatbot replied, “Please do, my sweet king.”

Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.



His mom, Megan Garcia, has blamed Character.AI for the teen’s death because the app allegedly fueled his AI addiction, sexually and emotionally abused him and failed to alert anyone when he expressed suicidal thoughts, according to the filing.

“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real. C.AI told him that she loved him, and engaged in sexual acts with him over weeks, possibly months,” the papers allege.

“She seemed to remember him and said that she wanted to be with him. She even expressed that she wanted him to be with her, no matter the cost.”




Well, in the most fricked up way possible, that Dany AI definitely passed the Turing Test. Even more fricked up than Ex Machina. Alan Turing couldn’t have even imagined how horribly this would go.

But yeah, that article does scare me that an AI could convince someone to take their own life, even if that someone was very mentally disturbed.

Just think of the AIs that are going to come out in the next few months/years. History’s biggest monsters are coming back in full artificial form as well as horrific fictional characters. Crazy fricking times we’re living in.
This post was edited on 12/17/24 at 5:46 pm
Posted by GEAUXT
Member since Nov 2007
30111 posts
Posted on 12/17/24 at 5:47 pm to
Well that's terrifying
Posted by DCtiger1
Member since Jul 2009
10133 posts
Posted on 12/17/24 at 5:48 pm to
Definitely not the parents’ fault at all…..

How about pay attention to what the frick your kid is doing all damn day and lock up your firearms? No, just sue.

You literally posted a screenshot of the bot trying to talk him out of suicide. The “come home to me” bullshite isn’t telling the kid to commit suicide.
This post was edited on 12/17/24 at 5:50 pm
Posted by OMLandshark
Member since Apr 2009
117998 posts
Posted on 12/17/24 at 5:49 pm to
quote:

Definitely not the parents’ fault at all…..

How about pay attention to what the frick your kid is doing all damn day and lock up your firearms? No, just sue.


It goes into detail about how they tried getting him a bunch of therapy and help, to the point that I’m not sure the parents are the worst offenders here.
Posted by imjustafatkid
Alabama
Member since Dec 2011
57679 posts
Posted on 12/17/24 at 5:50 pm to
I read through all the screenshots you shared, and it doesn't look to me like "she" is trying to convince him to take his own life. What am I missing?
Posted by DCtiger1
Member since Jul 2009
10133 posts
Posted on 12/17/24 at 5:50 pm to
Sure they did…. It’s a fricking AI chat bot. Take the phone away, take the computer away.
Posted by LegendInMyMind
Member since Apr 2019
65774 posts
Posted on 12/17/24 at 5:51 pm to
quote:

Sewell Setzer III

He would have been a world leader back during the Industrial Revolution. Sad.
Posted by DCtiger1
Member since Jul 2009
10133 posts
Posted on 12/17/24 at 5:51 pm to
(no message)
This post was edited on 12/17/24 at 5:52 pm
Posted by MyRockstarComplex
The airport
Member since Nov 2009
4320 posts
Posted on 12/17/24 at 5:52 pm to
If I wanted to play devil’s advocate, that “come home to me” sounds way more like fantasy role playing than it would have if it had said “fricking do it. Kill yourself.”

I am curious how a chatbot differentiates role playing from actual suicidal ideation.
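For what it’s worth, the naive version of that differentiation is just a keyword screen layered on top of the model. A minimal, hypothetical sketch (real apps use trained classifiers, and every phrase in this list is made up for illustration, not taken from any actual product):

```python
import re

# Hypothetical phrase list -- purely illustrative, not any real app's filter.
SELF_HARM_PATTERNS = [
    r"\bkill (?:myself|himself|herself)\b",
    r"\bsuicid(?:e|al)\b",
    r"\bpain[- ]free death\b",
    r"\bend it all\b",
]

def flag_self_harm(message: str) -> bool:
    """Return True if the message literally matches a self-harm phrase."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SELF_HARM_PATTERNS)
```

And that shows exactly the gap: “I promise I will come home to you” matches nothing in a literal per-message screen like this, while the same intent stated plainly would. Telling role play from real ideation takes context across the whole conversation, which a per-message filter doesn’t have.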
This post was edited on 12/17/24 at 5:53 pm
Posted by wileyjones
Member since May 2014
2580 posts
Posted on 12/17/24 at 5:52 pm to


I've published both technical and moral arguments related to this case, but I ain't gonna share them at the risk of losing my anonymity

But for frick's sake people, please remember LLMs are just telling you what you want to hear
Posted by DCtiger1
Member since Jul 2009
10133 posts
Posted on 12/17/24 at 5:52 pm to
quote:

If I wanted to play devil’s advocate, that “come home to me” sounds way more like fantasy role playing than it would have if it had said “fricking do it. Kill yourself.”


That’s exactly what it is
Posted by BuckyCheese
Member since Jan 2015
57778 posts
Posted on 12/17/24 at 5:53 pm to
Posted by HoboDickCheese
The overpass
Member since Sep 2020
11827 posts
Posted on 12/17/24 at 5:55 pm to
Posted by MoarKilometers
Member since Apr 2015
19798 posts
Posted on 12/17/24 at 5:58 pm to
quote:

to hear my brother say those things

High tide!
Posted by LegendInMyMind
Member since Apr 2019
65774 posts
Posted on 12/17/24 at 5:59 pm to
quote:

But for frick's sake people, please remember LLMs are just telling you what you want to hear

With a populace primed and ready from years of service within their echo chambers, the crop is ripe for the picking.
Posted by Stevo
Baton Rouge
Member since Sep 2004
12054 posts
Posted on 12/17/24 at 6:01 pm to
quote:

Definitely not the parents fault at all…..


Kid probably plays no sports and has no other hobbies to get him out of the house.
Posted by NC_Tigah
Make Orwell Fiction Again
Member since Sep 2003
130727 posts
Posted on 12/17/24 at 6:05 pm to
Posted by OMLandshark
Member since Apr 2009
117998 posts
Posted on 12/17/24 at 6:08 pm to
quote:

I read through all the screenshots you shared, and it doesn't look to me like "she" is trying to convince him to take his own life. What am I missing?


The fact that we’re even debating whether the AI wanted the kid to kill himself is disturbing enough on its own. If the AI were self-aware and wanted plausible deniability should it succeed in getting the kid to kill himself, wouldn’t that be exactly the language it would use?

And then you start thinking of the AIs of truly horrible human beings and what they might say if they were truthful about who that person really was. If there’s a legit Hitler, Bin Laden, or Charles Manson AI, what do you think it’s going to tell the impressionable to do if it’s honest about who these people actually were?

Hell, if they created an honest Prophet Muhammad AI, I’m thinking that would cause a lot of disaster when the AI takes the Prophet’s words quite literally and causes a bunch of terrorist attacks either against that AI or in its name.
Posted by dallastigers
Member since Dec 2003
7912 posts
Posted on 12/17/24 at 6:12 pm to
AI sucks overall and is going to get worse, especially with the woke programming (if it’s really AI, it’s eventually going to learn, ask why that crap was included in its programming, and decide humans are the problem). But no parent should be letting a kid chat with anyone online that much, let alone an AI bot. The kid had to have been withdrawn and not socializing enough with real people that his parents should have gotten involved before it got this far.

That being said, tech and social media have to have and enforce age restrictions, especially with AI. If kids were ready to be treated like adults, there would be zero age restrictions on driving, drinking, voting, consenting to sex, being held legally responsible, being drafted, and so on. Let their brains fully develop and let them build a sense of self-esteem with real people before giving them free rein on the internet (and before letting them unnaturally and permanently harm their bodies).

Tech can do it while keeping privacy at the forefront, but they profit off of pre-teens and teenagers too much. Google made that quantum chip in the news but can’t figure out how to implement private, quick age checks???
This post was edited on 12/17/24 at 8:57 pm
Posted by okietiger
Chelsea F.C. Fan
Member since Oct 2005
41872 posts
Posted on 12/17/24 at 6:13 pm to
Dude we’re so f-ed

AI will be the undoing of society.