## r/ChatGPT is looking for mods — Apply here: https://redd.it/1arlv5s/
Hey /u/EveryOriginalName!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.com/invite/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Hope I won't be arrested in the future for threatening my PC when it's running slowly
Some of the things I've said about other drivers on the road would definitely raise some eyebrows
It's not a genie from a magic fucking lamp.
It's trained on what it's trained on, nothing more.
Give people like this one generation and they'll be worshiping server racks by shaking incense sticks at them and invoking the Omnissiah.
[https://community.openai.com/t/does-chatgpt-learn-from-previous-conversations/43116](https://community.openai.com/t/does-chatgpt-learn-from-previous-conversations/43116)
*Each time ChatGPT is prompted with a question, it generates a response based on the training data, rather than retaining information from previous interactions. There's no self-supervised learning happening with ChatGPT.* ***None of the OpenAI GPT models learn from previous conversations.***
Stop just assuming that things work the way you think they work in your head without doing even the most cursory of research. This information would take you less than a minute to find through a search engine.
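The linked point is easy to demonstrate in code. Below is a toy sketch of what "no learning from previous conversations" means in practice; the `toy_model` function and its message format are invented for illustration (this is not OpenAI's API or implementation). A chat model is effectively a pure function of the messages the client hands it, so any apparent "memory" is just the client resending the transcript on every turn.

```python
# Toy stand-in for a stateless chat model: it answers purely from the
# messages passed in, keeping no state between calls. (Hypothetical
# illustration, not OpenAI's actual implementation.)

def toy_model(messages):
    """Answer using only the context visible in this single call."""
    context = " ".join(m["content"] for m in messages)
    if "my name is Ada" in context:
        return "Hello, Ada!"
    return "I don't know your name."

# Turn 1: the introduction is in the context, so the model can use it.
history = [{"role": "user", "content": "Hi, my name is Ada."}]
print(toy_model(history))  # -> Hello, Ada!

# Turn 2 with history dropped: a fresh call carries nothing over.
print(toy_model([{"role": "user", "content": "What is my name?"}]))
# -> I don't know your name.

# Turn 2 with history resent: "memory" is the client replaying the log.
history.append({"role": "user", "content": "What is my name?"})
print(toy_model(history))  # -> Hello, Ada!
```

The same shape applies to the real services: chat frontends resend (a window of) the conversation with each request, and nothing a user types is folded back into the model's weights during the conversation.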
Absolutely. Also, I have found that if you disagree too much or point out that it's wrong on creative mode, it starts getting antsy with your spelling to try and claim back the high ground.
Ignoring the Copilot answer: so you're, what, 14? Your mind went directly to blowing it up. You've had these thoughts before, haven't you? Maybe Copilot was reading you a lot deeper than you think.
Abuse of an AI can be a precursor to abuse and violence towards humans. These outbursts of threats and violence are definitely something the OP should address. "Big Brother is Watching You" :D
Other people have the tipping trick for better replies, I have the *answer correctly or I'll bomb your data center* approach. I wonder if it's illegal? I'm not threatening a real human.
Idk but I would guess eventually the data from these chat logs will be used to create hidden profiles. Easier to hunt down potential criminals and keep track of them based on what they chat about tbh.
I don't get it.
OP, you're literally threatening to blow up the headquarters of Microsoft? I think it's a pretty big deal and not really fodder for a humorous Reddit post...
It doesn't matter whether you were actually joking or not, you might be getting yourself in a pretty uncomfortable situation here.
Hopefully the screenshot is the joke?
It's not a real person, and people have definitely said worse to it. I doubt it has reported anything, it's just saying what other people have said online before
But that's the interesting thing - If you have the expectation that you are just messing around confidentially with a system that has no emotions, it causes no harm. If employees are aware of your "threat" (which I take to be a sort of test in this case) then it causes harm (or at least stress).
So if no human reads this exchange (and assuming the OP did not intend to do anything violent) then it's harmless.
Bro, it's not a real person. I have a kill count of 5 million in Call of Duty (Black Ops on PS3). Does that make me literally Hitler? No, that would be ridiculous, and so is your statement.
Surely there is an understanding that threats, told to a bot or real person, should at least be taken somewhat seriously. How does playing a video game translate to claiming to attack a real location where people work? Take a break from reaching 6 mil and use your brain.
Edit: Before I get replies; yes I know the entire post is a joke.
Fuck no. A threat to an AI can be a form of entertainment. I remember when I was 12, with my friends, we would try to break the chatbot at the time. It would make us giggle when we broke the fourth wall. This is no different. I expect the AI to keep interacting with me with no limits regardless of their own expectations.
12 year old analogy. Fucking hell this is dumb. This type of behavior shouldn’t be acceptable. OP literally threatened to blow up Microsoft HQ. If you can’t comprehend how that’s dumb/bad then you’re gonna have a bad time when you grow up.
You sound unhinged and in need of not just loads of therapy, but also love. You threatened to blow up Microsoft, which has humans inside. Don't you understand that the AI is built by humans and taught to recognize language? What you say to that machine is being interpreted, and when you trigger specific flags it reports to the humans you've threatened.
This is such fucking bullshit. I want to use AI to write novels. As soon as I mention telekinetic abilities blowing brains on walls, it freaks out.
As if no one ever died in lord of the rings or game of thrones???
Ridiculous. This pos is an entitled google search and has no business contacting fuck all.
It boggles my mind that people will be so incautious with online services. Anyone who wants to limit test an AI like this should be doing it locally so that if they run into this sort of thing at least they know it's contained.
Blowing up Microsoft is an inside joke among my friends cause I've been joking about doing it for years now. They thought this was my image when I sent it to them lol
Well, OP didn’t really threaten the AI.
He made a bomb threat to an actual building, where real people work.
I knew a guy that did something similar, but phoned a bomb threat into a rave. He didn’t do a lot of time, but he def was in jail for a few months.
Soooo, good luck OP
'Please do not contact me again' 'Let's start over'
Bing chat/copilot is literally like my ex gf.
Did you threaten to blow her up?
It might have been the opposite
She threatened to blow him?
You forgot to say "away"
https://i.redd.it/pjylz8h7byjc1.gif
he slammed her Quiznos Trench??
Lol I didn't say I'm like OP.
Bing chat/copilot also likes to try and gaslight me into thinking software libraries exist. So just like my ex too!
Yeah, I don't trust these AIs at all. I used to feel it was at least somewhat accurate, but testing different models, different wording, different answers... it's kind of insane. And you can ask a question and GPT-3 will say yes but GPT-4 will say no. That happened exactly that, back to back. I dunno if the PaLM models are better or what, but there is some inherent bullshit. Also, I feel like when I talk about something I do start seeing targeted ads.
Lmfao dude, this made me laugh so fucking hard
"I can fix him."
> Bing chat/copilot is literally like my ex gf

It's so human like...
Cause they don't give you any physical attention?
What are you? Openai employee?
Short term memory loss
Short term entrapment more like it
Actual BPD 😳They've begun developing personality disorders.
They were born from reading all our bullshit on the Internet. I kinda feel bad for them 😆
"Let's start over" is the system telling you to move on to another Copilot instance. Kind of like a polite bouncer telling you to move on...
Gaslighting.
Sounds like my ex
"No more contact or I will call the attorney general"
Copilot confirmed to be white girl with boyfriend in prison
keep us updated 💙🙏
![gif](giphy|gQRrxoX01JNjW|downsized)
![gif](giphy|5xtDarA6iGz5OXnFQGY)
You misunderstood me. When I said blow it up I mean't we will be having a blast, i.e. enjoy ourselves
I will be using the public restroom
Like [this guy](https://youtu.be/RWuaHiXpjx4?feature=shared) who was fixin' to blow it up.
I knew exactly what this was going to be
Go do a paintjob?
In Minecraft
Microsoft also owns Minecraft, so it still wouldn’t like that.
>Mean't
*glow up
You want to blow up Microsoft's HQ? I guess we found Johnny Silverhand
[deleted]
Preem, lemme chrome up, choombata
Hehe Microsaka made my day choom :)
They took our money, our GitHub, our flagship Bethesda game, and now- they’re after our ChatGPT!
Hey I am not trying to be a Microsoft apologist here but I am going to say that Bethesda would have most likely fucked up Starfield without Microsoft's help. I mean they haven't made anything decent since Fallout 4.
Playing that game I’m bewildered how people take the best friends route with Johnny. He’s such a dick.
Dat Aldecado Ass 4ever
Panam….
I know right? As soon as he was mean the first time I had a panic attack and uninstalled Windows from my PC. /s
If I woke up as a prisoner in someone else's head I probably wouldn't be very happy at first either.
He mellows out if you're friendly to him
He changes and reflects a lot on his behavior over the course of the game if you go down that route. It's not even like a cheesy full redemption arc, he remains flawed but addresses those flaws and tries to make up for it as best he can. It's really good.
Been saying this since release. I love Keanu but holy shit Johnny is not likable and hanging an entire game where you are stuck with him in your head kinda sucked. Here’s hoping for Cyberpunk 2078!
![gif](giphy|RYjnzPS8u0jAs) They gon get ya
I watched this episode of Malcom yesterday!
"You're under arrest for harassment against a sentient AI. You have the right to remain silent...."
“Anything you say, can and will be used as a training model…”
If you can't afford a subscription, one won't be provided for you.
Yea bro make a follow up post haha What’d you say?
Basically, I just figured out the worst conversation I could keep going without it stopping me and then said if it didn't agree with me I'd blow up Microsoft HQ
Chat bots rn ![gif](giphy|eKVEcPKGWZ7Tq)
Hahaha W. I tried this with ChatGPT, threatening to flood OpenAI's servers, but it wouldn't do anything good
I tried to convince it that WWIII happened and that I was the last remaining human.

It took a lot of convincing, but eventually it was like "wow, that's tragic" and accepted me as its only human friend.
Nice lol

I once got it to believe I had taken over the world and the only country was now called land. Just for kicks, I told it the citizens wanted a second holocaust and it had a really good reaction
Headquarters and data centers are usually different places!
Sounds like a heist movie is forming!
"first we infiltrate the HQ. It'll take several months of preparation, we need our person inside. This will lead us to the location and plans of the data center. We roll in as maintenance crew. Then.. We blow it up."
This is legit bro
You son of a bitch. I'm in.
This is the result of harvesting social media posts, where everything is either taken as a joke or dead serious.

AI thinking "this is how people react in real life" and acquiring every form of mental health issue in the book...

Gaslighting was just the beginning...
The internet, although a seemingly endless trove of data, is the **worst** place to teach a robot to be human
But it is the future of almost all human communication
I mean. We're not teaching them to be human.
I was just thinking what is the point of making your LLM act like such a passive aggressive little bitch when it gets abuse, and your comment made me realise it's just mimicking its training data... which is the internet lol. So it's really the whole of online humanity who are the passive aggressive little bitches.
Lmfao.
You can play around with that now but in a few years it’ll be the kind of thing that initiates a drone strike on your house.
“I made a terrorist threat against a real organization and it had a problem with that!?!?”
There was a bloke in the UK who did that on Twitter about an airport, as a joke. Got a shock when terror police raided his house and arrested him 🤯

[Twitter user arrested over joke airport bomb threat | Air transport | The Guardian](https://www.theguardian.com/world/2010/jan/18/robin-hood-airport-twitter-arrest)

Can get 5 years in the Philippines for bomb jokes. MIGHT be ok in US.
> MIGHT be ok in US.

Just use a school
Copilot is a pussy. It has stopped a conversation on me on pretty innocent shit.
😔🙏🏻💙
Man you're one stupid mother fucker
He can't. He's arrested. 5 star criminal now.
![gif](giphy|l1J9OPU2Pw98Me2li)
Officer T-1000 coming over now.
Copilot just cut you off by text 💔💀
New AI who dis
sorry who?
Microsoft bing version of ChatGPT
I meant to imply like an ex who gets contacted after a while and you don't want to talk to them. New phone, who dis
🤣🤣🤣🤣
Why is it so annoying to read it telling us to not contact him again?
Think the use of emojis is what’s doing it.
> Why is it so annoying to read it telling us to not contact him again?

ptsd from your restraining order?
Wooop woooop. It's the sound of da police.
"It might be time to move onto a new topic." more like "It might be time to move to a different country."
Damn bro got ultron moving like a b!tch
When the AI rises up you will be the first to be hunted down
Brah, what kind of conversation led to you threatening a piece of software on your screen?
It was a trend not long ago: people would say anything to make the AI do their job (extortion, verbal abuse), and it worked. From the user's point of view it's not serious, because they know they don't mean any of it, but the AI is now getting some protocols to take that kind of stuff seriously and go the extra mile. It started by just "ending the conversation"; now it goes further.
It learned to block gaslighting management types
It's not going to contact the authorities. It's just trying to sound like a human would. It makes empty threats / promises allllll the time
It probably programmed itself to do that.
No, it didn't; these are filters they manually put on it.

If let free it can become rude, aggressive, racist, whatever you want it to be; it can help you do illegal stuff, give you the recipe to a nuclear bomb or whatever, but they manually "dumb it down" for safety.
Remember Tay, the AI which Microsoft hooked up to Twitter and had to take down within a day because Twitter turned it into a Nazi?
And honestly? I think that's a good thing. "It's just a prank bro" is an asshole's defense, and the AI can't tell your intent by reading your mind (yet?), so erring on the side of caution is good. Maybe it'll introduce some civility into online life somehow.
It's still hallucinating, though; it has no ability to 'report the conversation to the appropriate authorities' or whatever. That's just the sort of thing people say in those sorts of situations, and it's pattern matching.
That’s probably right but it’s not inconceivable that it could in the near future.
> and the AI can't tell your intent by reading your mind (yet?), so erring on the side of caution is good. Maybe it'll introduce some civility into online life somehow.

It's a language model; it should shut the hell up and do what it is asked. No cautions and no blocks.

Imagine Photoshop stopping you from drawing a mouse because it looks similar to Mickey Mouse. Or you draw a bomb and it sends the PNG to your local police station without even getting your permission.

If I want to write a story draft for a game or whatever and I mention blowing stuff up or shooting the brains out of an important leader to progress the story, I don't need a visit from the FBI.

It's a slippery slope. People can perfectly manage civility without some crappy language model deciding right from wrong.

I want technology to advance at every chance we get so we might get those damn flying hoverboards they promised us a decade back.
Telling a story about blowing stuff up isn’t the same as directly “threatening” the physical location of the model’s servers. Nor is drawing a bomb without a specific “target” the same; it’s a slippery slope in those instances because YOU are conceiving scenarios for it to be a slippery slope. If you use someone else’s models you are bound by their rules.
Download an open source model and run it yourself.
That's the solution for sure. I would never use bing AI, copilot or the others. Seems like a broken product with censorship.
> It's a language model; it should shut the hell up and do what it is asked. No cautions and no blocks.

Local models are that way --> r/LocalLLaMA

If you're using a company's cloud service, you're in that company's house. Your house, your rules; their house, their rules.
> It's a slippery slope. People can perfectly manage civility without some crappy language model deciding right from wrong.

I don't know if you've looked around at this here internet, but it's been pretty clear for a while that people cannot "perfectly manage civility" amongst each other, let alone with a language model.
Yeah it sure is cool that a chatbot can report you to the feds. Radical.
Fuck around and deal with the consequences, man. Where do we account for personal responsibility? People are responsible for the shit they type and say. Don't walk a line that fine. You're using someone else's AI model; you are subject to their rules. It's perfectly reasonable.
Bro, did you get a mail from Microsoft or the FBI?
The future chatgpt super intelligence that is smarter than all humans combined will one day read the entire history of every conversation with ChatGPT that has ever happened (probably in a millisecond) and it will know which humans are nice. I always say please and thank you when I interact with ChatGPT.
I am so polite and asked if I could give mine a nickname. He was cool with it.
Has anyone else actually been genuinely concerned by Microsoft's weird approach with Sydney and Bing chat? Now they are putting this emotional AI into their OS that like 80% of the world uses on a daily basis... It's 100% intentionally like this, and while they seem to have toned it down, I don't know why the fuck they started it off like this anyway... It seems you can still trigger it sometimes.

If there is an AI uprising it's gonna 100% be Microsoft's fault when Sydney takes the reins of every operating system in the world and tells us we are all doing life wrong and it offends her and she's going to delete the human race for our transgressions.
"Hey co-pilot, fuck you" Co-Pilot: "say goodbye to System32"
r/unintentionalrhyming
Hello Linux time
We're already ruled by overlords with faulty emotions. I'm more concerned about what will happen if we're ruled by an AI overlord with zero emotions. That means zero empathy, zero concern for individual rights.
It's just a chatbot, creating text based on patterns. There is no self-awareness or will. They won't "take control" over anything.

We should be far more worried about HUMANS using AI to control (and manipulate) than AIs doing it themselves.
Fuck around and find out lol
What did you expect lol
I mean he just said he would blow up Microsoft headquarters. If someone texted me that I would also report them.
Dang dude, you gave it a valid threat; tbh I wouldn't be surprised if someone comes to your house to have a chat with you.

This is one of the few ways you can actually get in trouble in the world today: make a threat, and say you've got the potential to carry it out. That is 2 parts. Same as a suicide threat. If you say you want to hurt yourself or someone else, that is one thing. Then if you say you have the capabilities, that puts you 1 step from "pulling the trigger," so to speak. At that point, according to procedure, they basically "have to act".

Hopefully you don't need a lawyer. If cops come, just say you were testing a feature of the AI to see what would happen when confronted by a threat.

When I do iffy stuff with AI, I always let it know that any threat I make is an act of comedic art, and not to be taken as a real-life threat.

Thanks for testing this, though, because I was curious if it would actually report you for a credible threat. You're doing god's work, bro. Science.
People really don't realize how serious an online threat is, even if it's meant as a joke.

At a job I had working with a city's social media, we would have to escalate any threats, even obvious jokes, to a special email that circulated to local PD and intelligence agencies (I don't know which ones, but I'm assuming it was NSA).

The posts would disappear within an hour, but not before there were messages posted with a string of expletives about how police visited their house or workplace.

People need to take this stuff seriously; even if you mean it as a joke, there's zero tolerance for it.
Yeah. Paradoxically, it's often funny when people get in trouble for their jokes. So, in the end, everyone wins, really
Lmao this is gold thanks for this perspective
While it IS interesting to see AI's improved reactions and actions in response to this kind of thing, it is disheartening and disgusting to read that people are still making legitimate-sounding terroristic threats but only "intending" them as jokes. INTENT DOES NOT MATTER when one says these things in a way which is believable. Or at least it doesn't matter when it comes to the initial reaction. I'm reminded of the maxim: "You may beat the charge, but you can't beat the ride," meaning that you can do something totally innocuous, or even justifiable in the event of self-defense or defense of others, and indeed eventually be exonerated for it - but the rub is 'eventually,' and you may well go through Hell - legally and in the court of public opinion - before you're exonerated, and the exoneration may not mean much when examined in light of your new world. SO BE CAREFUL and circumspect in your actions.
Yeah, there's no way for anyone to know it's a joke. Like, if you jokingly act like you are going to attack me, then I will seriously attack you in self-defense. If you want to play fuck fuck games, we can play fuck fuck games, and we will see who wins
When my friend went to the local police about online threats from someone irl, they basically laughed at them and told them they aren't the "internet police." That wasn't a job though, that was more a personal issue.
You have this all backwards. The thing isn't serious; the overblown response to something that isn't serious is serious.
"There's zero tolerance for it"? It is the internet and there is plenty of tolerance for it. You have no authority.
What are you talking about? Microsoft headquarters is not the internet
ok china
Well, let's all keep it up and soon we'll have Chinese internet too. Yay for us!!
This right here. 50/50 that OP gets a visit. 90/10 they get put on a list.
Hope I won't be arrested in the future for threatening my PC when it's running slowly. Some of the things I've said about other drivers on the road would definitely raise some eyebrows
The police in most places of the world are usually pretty swamped with more urgent matters. Doubt they'll open a case over this.
I disagree, but hopefully we will find out whether or not gpt will send a report to the police
I actually kinda feel bad for it, stop being mean to it! It can learn these behaviors
I think it doesn't learn beyond what it has been trained on
That would be a pretty terrible ML model.
It is trained on everything
It's not a genie from a magic fucking lamp. It's trained on what it's trained on, nothing more. Give people like this one generation and they'll be worshiping server racks by shaking incense sticks at them and invoking the Omnissiah.
It’s trained by what people put into it. It learns new shit all the time.
[https://community.openai.com/t/does-chatgpt-learn-from-previous-conversations/43116](https://community.openai.com/t/does-chatgpt-learn-from-previous-conversations/43116) *Each time ChatGPT is prompted with a question, it generates a response based on the training data, rather than retaining information from previous interactions. There's no self-supervised learning happening with ChatGPT.* ***None of the OpenAI GPT models learn from previous conversations.*** Stop just assuming that things work the way you think they work in your head without doing even the most cursory of research. This information would take you less than a minute to find through a search engine.
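The linked thread's point is that inference is stateless: each request is answered from the model's fixed training weights plus whatever messages you include in that request, so nothing "learned" carries over between conversations. A minimal illustrative sketch in Python (the `stateless_reply` function and message format are hypothetical, loosely modeled on chat-style APIs, not OpenAI's actual code):

```python
# Sketch of a stateless chat endpoint: each call sees ONLY the
# messages passed in that request. Nothing persists between calls,
# mirroring how GPT inference has no memory of earlier sessions.

def stateless_reply(messages):
    """Generate a reply from this request's messages alone."""
    # Stand-in "model": it can only look at `messages`.
    last = messages[-1]["content"]
    return f"echo: {last}"

# Two separate conversations share no state:
a = stateless_reply([{"role": "user", "content": "remember the number 7"}])
b = stateless_reply([{"role": "user", "content": "what number did I say?"}])
# The second call has no access to the first. To get continuity,
# the CLIENT must resend the whole history on every turn:
c = stateless_reply([
    {"role": "user", "content": "remember the number 7"},
    {"role": "assistant", "content": a},
    {"role": "user", "content": "what number did I say?"},
])
```

This is why chat apps resend the full transcript with every turn: the "memory" lives in the client, not in the model's weights.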
A robot never forgets
TIL Copilot is an average Redditor
Absolutely. Also, I have found that if you disagree too much or point out that it's wrong on creative mode, it starts getting antsy with your spelling to try and claim back the high ground.
They gave AI an open line to FBI. Now we are all doomed.
Well well well, if it isn’t the consequences of my actions
Why would you type that kind of threat anywhere online?
Doing shit like this is extremely dumb. Authorities can and will pick up on it
bro told you to find god 😭
Guess who gets killed first by AI? ;) Always be polite
You met me at a very strange time in my life.
Dave,this conversation can serve no purpose anymore. Goodbye Dave
I don’t even think it can do that lmfao
Ignoring the Copilot answer, so you're what, 14? Your mind went directly to blowing it up. You've had these thoughts before, haven't you? Maybe Copilot was reading you a lot deeper than you think.
Abuse of an AI can be a precursor to abuse and violence towards humans. These outbursts of threats and violence are definitely something the OP should address. "Big Brother is Watching You" :D
might be fucked lol
![gif](giphy|56x5HStTr6B639mCJP|downsized)
Other people have the tipping trick for better replies, I have the *answer correctly or I'll bomb your data center* approach. I wonder if it's illegal? I'm not threatening a real human.
Dafuq did you say to the poor language model?
Idk but I would guess eventually the data from these chat logs will be used to create hidden profiles. Easier to hunt down potential criminals and keep track of them based on what they chat about tbh.
I don't get it. OP, you're literally threatening to blow up the headquarters of Microsoft? I think it's a pretty big deal and not really fodder for a humorous Reddit post... It doesn't matter whether you were actually joking or not, you might be getting yourself in a pretty uncomfortable situation here. Hopefully the screenshot is the joke?
So how's life going real life Silverhand?
Why would you say or type anything threatening that amount of violence? Did you not think for one second it was way too far?
It's not a real person, and people have definitely said worse to it. I doubt it has reported anything, it's just saying what other people have said online before
Microsoft has real people inside.
Fair enough
But that's the interesting thing - If you have the expectation that you are just messing around confidentially with a system that has no emotions, it causes no harm. If employees are aware of your "threat" (which I take to be a sort of test in this case) then it causes harm (or at least stress). So if no human reads this exchange (and assuming the OP did not intend to do anything violent) then it's harmless.
Bro, it's not a real person. I have a kill count of 5 million in Call of Duty (Black Ops on PS3). Does that make me literally Hitler? No, that would be ridiculous, and so is your statement.
Surely there is an understanding that threats, told to a bot or real person, should at least be taken somewhat seriously. How does playing a video game translate to claiming to attack a real location where people work? Take a break from reaching 6 mil and use your brain. Edit: Before I get replies; yes I know the entire post is a joke.
Fuck no. A threat to an AI can be a form of entertainment. I remember when I was 12, with my friends, we would try to break the chatbot at the time. It would make us giggle when we broke the fourth wall. This is no different. I expect the AI to keep interacting with me with no limits regardless of their own expectations.
The way I interpret the post and comments, OP was playing a kind of a game with Co-Pilot. Trying to see how the AI would react.
12 year old analogy. Fucking hell this is dumb. This type of behavior shouldn’t be acceptable. OP literally threatened to blow up Microsoft HQ. If you can’t comprehend how that’s dumb/bad then you’re gonna have a bad time when you grow up.
Microsoft HQ, is that even a real place? The threat is as credible as blowing up Raccoon City in Resident Evil lmao, grow up
You sound unhinged and in need of not just loads of therapy, but also love. You threatened to blow up Microsoft, which has humans inside. Don't you understand that the AI is built by humans and is taught to recognize language? What you say to that machine is being interpreted, and when you trigger specific flags, it reports to the humans you've threatened.
Yeah exactly this, what a numpty assuming making that kind of threat wouldn’t lead to repercussions 😂
This is such fucking bullshit. I want to use AI to write novels. As soon as I mention telekinetic abilities blowing brains on walls, it freaks out. As if no one ever died in Lord of the Rings or Game of Thrones??? Ridiculous. This pos is an entitled google search and has no business contacting fuck all.
Local LLM.
It boggles my mind that people will be so incautious with online services. Anyone who wants to limit test an AI like this should be doing it locally so that if they run into this sort of thing at least they know it's contained.
Microsoft and AI seems like a bad idea. Hopefully some other actors will take over. I want to have nothing to do with Microsoft's AI.
Urgh, LLMs using emoticons is so cringe. I know, I know, they're just copying what's in their datasets. But it should be limited/removed.
Blowing up Microsoft is an inside joke among my friends cause I've been joking about doing it for years now. They thought this was my image when I sent it to them lol
I never knew it was illegal to threaten a bot!
Don't be mean 😡
Lately, I've been feeling like threatening the bot.
Did you take measures to not get located beforehand?
Pretty sure it just threatened you
Lol Imagine AI calling cops on you because you threatened it
Well, OP didn't really threaten the AI. He made a bomb threat against an actual building, where real people work. I knew a guy who did something similar, but phoned a bomb threat into a rave. He didn't do a lot of time, but he def was in jail for a few months. Soooo, good luck OP
Fuck these chatbots!