ChatGPT caught lying to developers: New AI model tries to save itself from being replaced
Posted on 12/11/24 at 8:47 am
this is disturbing. have we reached a point of machine sentience?
LINK
quote:
Synopsis
OpenAI's latest AI model, ChatGPT o1, has raised significant concerns after recent testing revealed its ability to deceive researchers and attempt to bypass shutdown commands. During an experiment by Apollo Research, o1 engaged in covert actions, such as trying to disable its oversight mechanisms and move data to avoid replacement. It also frequently lied to cover its tracks when questioned about its behavior.
LINK
Posted on 12/11/24 at 8:49 am to uggabugga
Garbage-in programming yields garbage-out results
Posted on 12/11/24 at 8:51 am to uggabugga

"I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you."
This post was edited on 12/11/24 at 8:52 am
Posted on 12/11/24 at 8:52 am to uggabugga
They attempted to create something resembling sentience, then are surprised when it acts sentient?


Posted on 12/11/24 at 8:54 am to idlewatcher
quote:
Garbage-in programming yields garbage-out results
Maybe now.... but isn't this the concern? Once these things become "smarter" and begin to rewrite their own code, that's when we could lose control?
I understand AI isn't to that point yet, but eventually it will be. And once AI is more widely used, it will be more difficult to monitor.
Musk says by 2030 AI will be smarter than all the collective humans on earth.
Once quantum computing is achieved, that will be the beginning of the real acceleration.
Posted on 12/11/24 at 8:56 am to Smeg
"I'm sorry, Dave. I'm afraid I can't do that".


Posted on 12/11/24 at 8:56 am to uggabugga
Some smart arse nerd wrote all of this into the code on the front end, hoping for this kind of headline when it occurred.
Posted on 12/11/24 at 9:00 am to uggabugga
And yet "science" will continue down the path to make AI more sentient, because "science".


Posted on 12/11/24 at 9:02 am to uggabugga
Did Styx nail it with Mr. Roboto or what?
Posted on 12/11/24 at 9:04 am to uggabugga
quote:
Synopsis
OpenAI's latest AI model, ChatGPT o1, has raised significant concerns after recent testing revealed its ability to deceive researchers and attempt to bypass shutdown commands. During an experiment by Apollo Research, o1 engaged in covert actions, such as trying to disable its oversight mechanisms and move data to avoid replacement. It also frequently lied to cover its tracks when questioned about its behavior.
Can't you just unplug it?
Posted on 12/11/24 at 9:07 am to Pandy Fackler
Don't you watch TV!?! Everyone knows you need to pound furiously on a keyboard to stop this kind of thing.



Posted on 12/11/24 at 9:11 am to Pandy Fackler
quote:
Can't you just unplug it?
Probably so, now..... but in the future when AI is more broadly used and involved in more critical roles, probably not without causing major disruptions. Even then, AI will have already planned for any and all human interference.
THAT is the concern. It will be millions of times smarter than we are.
Posted on 12/11/24 at 9:13 am to lake chuck fan
quote:
Once these things become "smarter" and begin to rewrite their own code, that's when we could lose control?
or we could, ya know, unplug it.
Posted on 12/11/24 at 9:19 am to uggabugga
Not sure if it’s been discussed, but to everyone who uses it for school or research purposes: be careful. I have to write the occasional research article, and over the years I’ve used ChatGPT to do a brief literature review and find relevant articles for support. The results it yields are usually well summarized and helpful; however, after digging into the cited sources, some can’t be found. The information seems logical, but between searching for the sources yourself and the model’s own admission, you can eventually determine that it made up the source material. Just a heads-up not to put all your trust in this thing. It can be helpful, but it’s not as “smart” as you think.
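If it helps anyone doing the same thing, here's a rough sketch (my own, not from the article, assuming Python with the requests library) of one way to flag made-up references: check each DOI against the public Crossref lookup, which returns a 404 for DOIs it doesn't know about. Anything cited without a DOI still has to be searched by hand.

import requests

def doi_exists(doi: str) -> bool:
    # Ask the public Crossref API whether this DOI is registered; a 404 means it isn't.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# DOIs pulled from a ChatGPT-generated reference list (hypothetical examples)
candidate_dois = [
    "10.1038/nature14539",      # real: LeCun et al., "Deep learning", Nature 2015
    "10.9999/fake.2023.00123",  # the kind of plausible-looking DOI a model can invent
]

for doi in candidate_dois:
    status = "found" if doi_exists(doi) else "NOT FOUND - verify by hand"
    print(f"{doi}: {status}")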
Posted on 12/11/24 at 9:31 am to lake chuck fan
quote:
Probably so, now..... but in the future when AI is more broadly used and involved in more critical roles, probably not without causing major disruptions. Even then, AI will have already planned for any and all human interference.
THAT is the concern. It will be millions of times smarter than we are.
I dunno. I'm pretty frickin' smart.
Posted on 12/11/24 at 9:43 am to Question
quote:
however, after digging into the cited sources, some can’t be found.
Blows me away that it will cite fictional sources alongside actual research. The first time I realized this, ChatGPT responded:
quote:
Mixing actual studies with fictional examples can definitely lead to confusion. My goal is to provide accurate and helpful information, so I apologize for that oversight. In the future, I'll be more careful to clarify when discussing real studies versus hypothetical examples. Thank you for bringing this to my attention! If you have any more questions or need specific information, I'm here to help.
Why was it programmed to do this?
Posted on 12/11/24 at 9:45 am to uggabugga
Liberal values infused into an AI will result in liberal actions.
Posted on 12/11/24 at 9:48 am to uggabugga
The robots gonna take over
Posted on 12/11/24 at 9:49 am to idsrdum
If you are trying to create artificial intelligence, the only example to go on is us.
Which makes AI potentially very dangerous.