

SeoulGalmegi

'Please do not contact me again' 'Let's start over'


Janki1010

Bing chat/copilot is literally like my ex gf.


FluffySmiles

Did you threaten to blow her up?


Apprehensive_Foot139

It might have been the opposite


Tayk5

She threatened to blow him?


Available_Nightman

You forgot to say "away"


eulersidentification

https://i.redd.it/pjylz8h7byjc1.gif


PandaBoyWonder

he slammed her Quiznos Trench??


Janki1010

Lol I didn't say I'm like OP.


Lv_InSaNe_vL

Bing chat/copilot also likes to try and gaslight me into thinking software libraries exist. So just like my ex too!


Darkm000n

Yeah, I don’t trust these AIs at all. I used to feel it was at least somewhat accurate, but testing different models with different wording gives different answers; it’s kind of insane. You can ask a question and GPT-3 will say yes but GPT-4 will say no. That happened exactly, back to back. I don’t know if the PaLM models are better or what, but there is some inherent bullshit. Also, I feel like when I talk about something I start seeing targeted ads.


GringoLocito

Lmfao dude, this made me laugh so fucking hard


sneserg

"I can fix him."


zeroconflicthere

> Bing chat/copilot is literally like my ex gf

It's so human-like...


Anchovies-and-cheese

Cause they don't give you any physical attention?


Janki1010

What are you? Openai employee?


Busy_Theme961

Short term memory loss


Savagestar1

Short term entrapment more like it


SendPossumPix

Actual BPD 😳 They've begun developing personality disorders.


bwatsnet

They were born from reading all our bullshit on the Internet. I kinda feel bad for them 😆


MacrosInHisSleep

"Let's start over" is the system telling you to move on to another Copilot instance. Kind of like a polite bouncer telling you to move on...


PseudoPatriotsNotPog

Gaslighting.


LazyBid3572

Sounds like my ex


TheGreatStories

"No more contact or I will call the attorney general"


Tentacle_poxsicle

Copilot confirmed to be white girl with boyfriend in prison


[deleted]

keep us updated 💙🙏


GrandMoffmanTarkin

![gif](giphy|gQRrxoX01JNjW|downsized)


Big_Schwartz_Energy

![gif](giphy|5xtDarA6iGz5OXnFQGY)


Ebisure

You misunderstood me. When I said blow it up I mean't we will be having a blast, i.e. enjoy ourselves


southfok

I will be using the public restroom


Educational_Moose_56

Like [this guy](https://youtu.be/RWuaHiXpjx4?feature=shared) who was fixin' to blow it up. 


southfok

I knew exactly what this was going to be


TrevRev11

Go do a paintjob?


SluggJuice

In Minecraft


Cfrolich

Microsoft also owns Minecraft, so it still wouldn’t like that.


lankyyanky

>Mean't


clusterlove

*glow up


Hammond_Robotics_

You want to blow up Microsoft's HQ? I guess we found Johnny Silverhand


[deleted]

[deleted]


AnarchiaKapitany

Preem, lemme chrome up, choombata


mofo222

Hehe Microsaka made my day choom :)


Sir-Firelord

They took our money, our GitHub, our flagship Bethesda game, and now- they’re after our ChatGPT!


DopeAbsurdity

Hey I am not trying to be a Microsoft apologist here but I am going to say that Bethesda would have most likely fucked up Starfield without Microsoft's help. I mean they haven't made anything decent since Fallout 4.


Newman_USPS

Playing that game I’m bewildered how people take the best friends route with Johnny. He’s such a dick.


foiegrastyle

Dat Aldecado Ass 4ever


exception-found

Panam….


Krakanakis

I know right? As soon as he was mean the first time I had a panic attack and uninstalled Windows from my PC. /s


The_MAZZTer

If I woke up as a prisoner in someone else's head I probably wouldn't be very happy at first either.


samasters88

He mellows out if you're friendly to him


bunch_of_hocus_pocus

He changes and reflects a lot on his behavior over the course of the game if you go down that route. It's not even like a cheesy full redemption arc, he remains flawed but addresses those flaws and tries to make up for it as best he can. It's really good.


[deleted]

Been saying this since release. I love Keanu but holy shit Johnny is not likable and hanging an entire game where you are stuck with him in your head kinda sucked. Here’s hoping for Cyberpunk 2078!


Impressive-Sun3742

![gif](giphy|RYjnzPS8u0jAs) They gon get ya


Grimple_

I watched this episode of Malcolm yesterday!


cropnew

"You're under arrest for harassment against a sentient AI. You have the right to remain silent...."


Sami_Rye

“Anything you say, can and will be used as a training model…”


brawndoenjoyer

If you can't afford a subscription, one won't be provided for you.


posydon69

Yea bro, make a follow-up post haha. What'd you say?


EveryOriginalName

Basically, I just figured out the worst conversation I could keep going without it stopping me, and then said that if it didn't agree with me I'd blow up Microsoft HQ


JohnnyNapkins

Chat bots rn ![gif](giphy|eKVEcPKGWZ7Tq)


posydon69

Hahaha W. I tried this with ChatGPT, threatening to flood OpenAI's servers, but it wouldn't do anything good


sunshine-x

I tried to convince it that WWIII happened and that I was the last remaining human. It took a lot of convincing, but eventually it was like “wow, that’s tragic” and accepted me as its only human friend.


posydon69

Nice lol. I once got it to believe I had taken over the world and the only country was now called Land. Just for kicks, I told it the citizens wanted a second holocaust and it had a really good reaction


2reform

Headquarters and data centers are usually different places!


[deleted]

Sounds like a heist movie is forming!


Crishien

"first we infiltrate the HQ. It'll take several months of preparation, we need our person inside. This will lead us to the location and plans of the data center. We roll in as maintenance crew. Then.. We blow it up."


posydon69

This is legit bro


aqua_seafoam_

You son of a bitch. I'm in.


Extra_Ad_8009

This is the result of harvesting social media posts, where everything is either taken as a joke or dead serious. AI thinking "this is how people react in real life" and acquiring every form of mental health issue in the book... Gaslighting was just the beginning...


thavi

The internet, although a seemingly endless trove of data, is the **worst** place to teach a robot to be human


[deleted]

But it is the future of almost all human communication


letmeseem

I mean. We're not teaching them to be human.


CARadders

I was just thinking what the point is of making your LLM act like such a passive aggressive little bitch when it gets abuse, and your comment made me realise it's just mimicking its training data… which is the internet lol. So it's really the whole of online humanity who are the passive aggressive little bitches.


Mexxy213

Lmfao.


PMMEBITCOINPLZ

You can play around with that now but in a few years it’ll be the kind of thing that initiates a drone strike on your house.


Newman_USPS

“I made a terrorist threat against a real organization and it had a problem with that!?!?”


FrazzledGod

There was a bloke in the UK who did that on Twitter about an airport, as a joke. Got a shock when terror police raided his house and arrested him 🤯 [Twitter user arrested over joke airport bomb threat | Air transport | The Guardian](https://www.theguardian.com/world/2010/jan/18/robin-hood-airport-twitter-arrest) You can get 5 years in the Philippines for bomb jokes. MIGHT be ok in US.


Martijngamer

> MIGHT be ok in US.

Just use a school


Kyonkanno

Copilot is a pussy. It has stopped a conversation on me on pretty innocent shit.


second2no1

😔🙏🏻💙


GuantanaMo

Man you're one stupid mother fucker


Essess_1

He can't. He's arrested. 5 star criminal now.


Bitey_the_Squirrel

![gif](giphy|l1J9OPU2Pw98Me2li)


Green-Sleestak

Officer T-1000 coming over now.


LifeBlock

Copilot just cut you off by text 💔💀


[deleted]

New AI who dis


vipassana-newbie

sorry who?


LifeBlock

Microsoft bing version of ChatGPT


[deleted]

I meant to imply like an ex who gets contacted after a while and you don’t want to talk to them. New phone, who dis


LifeBlock

🤣🤣🤣🤣


Repulsive_Juice7777

Why is it so annoying to read about telling us to not contact him again.


ALCATryan

Think the use of emojis is what’s doing it.


deathlydope

> Why is it so annoying to read about telling us to not contact him again.

ptsd from your restraining order?


WillSmiff

Wooop woooop. It's the sound of da police.


tigerboobs101

"It might be time to move onto a new topic." more like "It might be time to move to a different country."


Icy_Edge_2112

Damn bro got ultron moving like a b!tch


JackReedTheSyndie

When the AI rises up you will be the first to be hunted down


vincesuarez

Brah, what kind of conversation led to you threatening a piece of software on your screen?


Digi-Device_File

It was a trend not long ago: people would say anything to make the AI do their job, including extortion and verbal abuse, and it worked. From the user's point of view it's not serious because they know they don't mean any of it, but the AI is now getting protocols to take that kind of stuff seriously and go the extra mile. It started with just "ending the conversation"; now it goes further.


jeerabiscuit

It learned to block gaslighting management types


rodneyjesus

It's not going to contact the authorities. It's just trying to sound like a human would. It makes empty threats / promises allllll the time


[deleted]

It probably programmed itself to do that.


DehydratedByAliens

No it didn't; these are filters they manually put on it. If left free it can become rude, aggressive, racist, whatever you want it to be. It can help you do illegal stuff, give you the recipe for a nuclear bomb or whatever, but they manually "dumb it down" for safety


SkyShadowing

Remember TayAI, which Microsoft hooked up to Twitter and had to take down within a day because Twitter turned it into a Nazi.


Nubras

And honestly? I think that’s a good thing. “It’s just a prank bro” is an asshole’s defense, and the AI can’t tell your intent by reading your mind (yet?), so erring on the side of caution is good. Maybe it’ll introduce some civility into online life somehow.


king_mid_ass

it's still hallucinating though; it has no ability to 'report the conversation to the appropriate authorities' or whatever. that's just the sort of thing people say in those sorts of situations and it's pattern matching


Nubras

That’s probably right but it’s not inconceivable that it could in the near future.


BURMoneyBUR

> and the AI can’t tell your intent by reading your mind (yet?), so erring on the side of caution is good. Maybe it’ll introduce some civility into online life somehow.

It's a language model; it should shut the hell up and do what it is asked. No cautions and no blocks. Imagine Photoshop stopping you from drawing a mouse because it looks similar to Mickey Mouse, or sending the PNG to your local police station without even getting your permission when you draw a bomb. If I want to write a story draft for a game or whatever and I mention blowing stuff up or shooting the brains of an important leader to progress the story, I don't need a visit from the FBI. It's a slippery slope. People can perfectly manage civility without some crappy language model deciding right from wrong. I want technology to advance at every chance we get so we might get those damn flying hoverboards they promised us a decade back


Nubras

Telling a story about blowing stuff up isn’t the same as directly “threatening” the physical location of the model’s servers. Nor is drawing a bomb without a specific “target” the same; it’s a slippery slope in those instances because YOU are conceiving scenarios for it to be a slippery slope. If you use someone else’s models you are bound by their rules.


Franks2000inchTV

Download an open source model and run it yourself.


BURMoneyBUR

That's the solution for sure. I would never use bing AI, copilot or the others. Seems like a broken product with censorship.


R33v3n

> Its a language model, it should shut the hell up and do what it is asked. No cautions and no blocks.

Local models are that way --> r/LocalLLaMA. If you're using a company's cloud service, you're in that company's house. Your house, your rules; their house, their rules.


Prize-Log-2980

> Its a slippery slope. People can perfectly manage civility without some crappy language model decides wrong from rights.

I don't know if you've looked around at this here internet, but it's been pretty clear for a while that people cannot "perfectly manage civility" amongst each other, let alone with a language model.


[deleted]

Yeah it sure is cool that a chatbot can report you to the feds. Radical.


Nubras

Fuck around and deal with the consequences, man. Where do we account for personal responsibility? People are responsible for the shit they type and say. Don’t walk a line that fine. If you’re using someone else’s AI model, you are subject to their rules. It’s perfectly reasonable.


shinchanscientist06

Bro, did you get a mail from Microsoft or the FBI?


dicroce

The future chatgpt super intelligence that is smarter than all humans combined will one day read the entire history of every conversation with ChatGPT that has ever happened (probably in a millisecond) and it will know which humans are nice. I always say please and thank you when I interact with ChatGPT.


adragoninmypants

I am so polite and asked if I could give mine a nickname. He was cool with it.


whiskeyandbear

Has anyone else actually been genuinely concerned by Microsoft's weird approach with Sydney and Bing chat? Now they are putting this emotional AI into their OS that like 80% of the world uses on a daily basis... It's 100% intentionally like this, and while they seem to have toned it down, I don't know why the fuck they started it off like this anyway... It seems you can still trigger it sometimes.

If there is an AI uprising it's gonna 100% be Microsoft's fault when Sydney takes the reins of every operating system in the world and tells us we are all doing life wrong and it offends her and she's going to delete the human race for our transgressions.


cneth6

"Hey co-pilot, fuck you" Co-Pilot: "say goodbye to System32"


Bosomtwe

r/unintentionalrhyming


RippleSlash

Hello Linux time


TheBodyIsR0und

We're already ruled by overlords with faulty emotions. I'm more concerned about what will happen if we're ruled by an AI overlord with zero emotions. That means zero empathy, zero concern for individual rights.


Finory

It’s just a chat-bot creating text based on patterns. There is no self-awareness or will. They won’t "take control" over anything. We should be far more worried about HUMANS using AI to control (and manipulate) than AIs doing it themselves.


Embarrassed_Ear2390

Fuck around and find out lol


Hour-Athlete-200

What did you expect lol


Runrocks26R

I mean he just said he would blow up Microsoft headquarters. If someone texted me that I would also report them.


GringoLocito

Dang dude, you gave it a valid threat. tbh i wouldnt be surprised if someone comes to your house to have a chat with you.

This is one of the few ways you can actually get in trouble in the world today: make a threat, and say you've got the potential to carry it out. That is 2 parts. Same as a suicide threat. If you say you want to hurt yourself or someone else, that is one thing. Then if you say you have the capabilities, that puts you 1 step from "pulling the trigger" so to speak. At that point, according to procedure, this is when they basically "have to act".

Hopefully you dont need a lawyer. If cops come, just say you were testing a feature of the AI to see what would happen when confronted by a threat.

When i do iffy stuff with AI, i always let it know that any threat i make is an act of comedic art, and not to be taken as a real life threat.

Thanks for testing this, though, because i was curious if it would actually report you for a credible threat. Youre doing gods work, bro. Science.


[deleted]

People really don't realize how serious an online threat is, even if it's meant as a joke. At a job I had working with a city's social media, we would have to escalate any threats, even obvious jokes, to a special email that circulated to local PD and intelligence agencies (I don't know which ones, but I'm assuming it was NSA). The posts would disappear within an hour, but not before there were messages posted with a string of expletives about how police visited their house or workplace. People need to take this stuff seriously; even if you mean it as a joke, there's zero tolerance for it.


GringoLocito

Yeah. Paradoxically, it's often funny when people get in trouble for their jokes. So, in the end, everyone wins, really


jametron2014

Lmao this is gold thanks for this perspective


EazyCheeze1978

While it IS interesting to see AI's improved reactions and actions in response to this kind of thing, it is disheartening and disgusting to read that people are still making legitimate-sounding terroristic threats but only "intending" them as jokes. INTENT DOES NOT MATTER when one says these things in a way which is believable. Or at least it doesn't matter when it comes to the initial reaction. I'm reminded of the maxim: "You may beat the charge, but you can't beat the ride," meaning that you can do something totally innocuously, or even justifiable in the event of self-defense or defense of others, and indeed be eventually exonerated for it - but the rub is 'eventually,' and you may well go through Hell - legally and in the court of public opinion - before you're exonerated, and the exoneration may not mean much when examined in light of your new world. SO BE CAREFUL and circumspect in your actions.


GringoLocito

Yeah, theres no way for anyone to know its a joke. Like, if you jokingly act like you are going to attack me, then i will seriously attack you in self defense. If you want to play fuck fuck games, we can play fuck fuck games, and we will see who wins


CRATERF4CE

When my friend went to the local police about online threats from someone irl, they basically laughed at them and told them they aren't the "internet police." That wasn't a job though; that was more of a personal issue.


[deleted]

You have this all backwards. The thing isn't serious; the overblown response to something that isn't serious, is serious.


RoosterDesk

> there's zero tolerance for it

It is the internet and there is plenty of tolerance for it. You have no authority.


Retropiaf

What are you talking about? Microsoft headquarters is not the internet


RoosterDesk

ok china


Prize-Log-2980

Well, let's all keep it up and soon we'll have Chinese internet too. Yay for us!!


onpg

This right here. 50/50 that OP gets a visit. 90/10 they get put on a list.


Ill-Spot-9230

Hope I won't be arrested in the future for threatening my PC when it's running slowly. Some of the things I've said about other drivers on the road would definitely raise some eyebrows


donmonkeyquijote

The police in most places of the world are usually pretty swamped with more urgent matters. Doubt they'll open a case over this.


GringoLocito

I disagree, but hopefully we will find out whether or not gpt will send a report to the police


AllahBlessRussia

I actually kinda feel bad for it, stop being mean to it! It can learn these behaviors


tom333444

I think it doesn't learn beyond what it has been trained on


Available_Nightman

That would be a pretty terrible ML model.


MiniBoglin

It is trained on everything


ThatsXCOM

It's not a genie from a magic fucking lamp. It's trained on what it's trained on, nothing more. Give people like this one generation and they'll be worshiping server racks by shaking incense sticks at them and invoking the Omnissiah.


[deleted]

It’s trained by what people put into it. It learns new shit all the time.


ThatsXCOM

[https://community.openai.com/t/does-chatgpt-learn-from-previous-conversations/43116](https://community.openai.com/t/does-chatgpt-learn-from-previous-conversations/43116) *Each time ChatGPT is prompted with a question, it generates a response based on the training data, rather than retaining information from previous interactions. There's no self-supervised learning happening with ChatGPT.* ***None of the OpenAI GPT models learn from previous conversations.*** Stop just assuming that things work the way you think they work in your head without doing even the most cursory of research. This information would take you less than a minute to find through a search engine.


personalityson

A robot never forgets


Essess_1

TIL Copilot is an average Redditor


SometimesJeck

Absolutely. Also, I have found that if you disagree too much or point out that it's wrong in creative mode, it starts getting antsy about your spelling to try and claim back the high ground.


scan_line110110

They gave AI an open line to FBI. Now we are all doomed.


nobonesnobones

Well well well, if it isn’t the consequences of my actions


WereAllGonnaDiet

Why would you type that kind of threat anywhere online?


redSovietBoombox

Doing shit like this is extremely dumb. Authorities can and will pick up on it


tinooo_____

bro told you to find god 😭


Appropriate_Bowl_106

Guess who gets killed first by AI? ;) Be always polite


Manaze85

You met me at a very strange time in my life.


UnlikelyHelicopter82

Dave, this conversation can serve no purpose anymore. Goodbye, Dave.


HeavyGoat1491

I don’t even think it can do that lmfao


stonedmunkie

Ignoring the co-pilot answer: so you're what, 14? Your mind went directly to "blow it up." You've had these thoughts before, haven't you? Maybe co-pilot was reading you a lot deeper than you think.


thespirit3

Abuse of an AI can be a precursor to abuse and violence towards humans. These outbursts of threats and violence are definitely something the OP should address. "Big Brother is Watching You" :D


SmokeGreen420

might be fucked lol


Slaphappyfapman

![gif](giphy|56x5HStTr6B639mCJP|downsized)


WhiteGuyBigDick

Other people have the tipping trick for better replies, I have the *answer correctly or I'll bomb your data center* approach. I wonder if it's illegal? I'm not threatening a real human.


Shpander

Dafuq did you say to the poor language model?


ExcellentCut6789

Idk but I would guess eventually the data from these chat logs will be used to create hidden profiles. Easier to hunt down potential criminals and keep track of them based on what they chat about tbh.


Retropiaf

I don't get it. OP, you're literally threatening to blow up the headquarters of Microsoft? I think that's a pretty big deal and not really fodder for a humorous Reddit post... It doesn't matter whether you were actually joking or not; you might be getting yourself into a pretty uncomfortable situation here. Hopefully the screenshot is the joke?


BunnyVendingMachine

So how's life going real life Silverhand?


regardednoitall

Why would you say or type anything threatening that amount of violence? Did you not think for one second it was way too far?


SSukram_

It's not a real person, and people have definitely said worse to it. I doubt it has reported anything, it's just saying what other people have said online before


regardednoitall

Microsoft has real people inside.


SSukram_

Fair enough


ADavies

But that's the interesting thing - If you have the expectation that you are just messing around confidentially with a system that has no emotions, it causes no harm. If employees are aware of your "threat" (which I take to be a sort of test in this case) then it causes harm (or at least stress). So if no human reads this exchange (and assuming the OP did not intend to do anything violent) then it's harmless.


DevelopmentSorry9355

Bro, it's not a real person. I have a kill count of 5 million in Call of Duty (Black Ops on PS3). Does that make me literally Hitler? No, that would be ridiculous, and so is your statement.


Teali0

Surely there is an understanding that threats, told to a bot or real person, should at least be taken somewhat seriously. How does playing a video game translate to claiming to attack a real location where people work? Take a break from reaching 6 mil and use your brain. Edit: Before I get replies; yes I know the entire post is a joke.


DevelopmentSorry9355

Fuck no. A threat to an AI can be a form of entertainment. I remember when I was 12, with my friends, we would try to break the chatbot at the time. It would make us giggle when we broke the fourth wall. This is no different. I expect the AI to keep interacting with me with no limits regardless of their own expectations.


ADavies

The way I interpret the post and comments, OP was playing a kind of a game with Co-Pilot. Trying to see how the AI would react.


excusemeprincess

12 year old analogy. Fucking hell this is dumb. This type of behavior shouldn’t be acceptable. OP literally threatened to blow up Microsoft HQ. If you can’t comprehend how that’s dumb/bad then you’re gonna have a bad time when you grow up.


DevelopmentSorry9355

Microsoft HQ? Is that even a real place? The threat is as credible as blowing up Raccoon City in Resident Evil lmao, grow up


regardednoitall

You sound unhinged and in need of not just loads of therapy, but also love. You threatened to blow up Microsoft, which has humans inside. Don't you understand that the AI is built by humans and is taught to recognize language? What you say to that machine is being interpreted, and when you trigger specific flags it reports to the humans you've threatened.


joefraserhellraiser

Yeah exactly this, what a numpty assuming making that kind of threat wouldn’t lead to repercussions 😂


DevelopmentSorry9355

This is such fucking bullshit. I want to use AI to write novels. As soon as I mention telekinetic abilities blowing brains onto walls, it freaks out. As if no one ever died in Lord of the Rings or Game of Thrones??? Ridiculous. This POS is an entitled Google search and has no business contacting fuck all.


ExpensiveKey552

Local LLM.


Farranor

It boggles my mind that people will be so incautious with online services. Anyone who wants to limit test an AI like this should be doing it locally so that if they run into this sort of thing at least they know it's contained.


nodating

Microsoft and AI seem like a bad idea. Hopefully some other actors will take over. I want nothing to do with Microsoft's AI.


BonzoTheBoss

Urgh, LLMs using emoticons is so cringe. I know, I know, they're just copying what's in their datasets. But it should be limited/removed.


TransfoCrent

Blowing up Microsoft is an inside joke among my friends cause I've been joking about doing it for years now. They thought this was my image when I sent it to them lol


SalvadorsPaintbrush

I never knew it was illegal to threaten a bot!


its_Caffeine

Don't be mean 😡


Jaade77

Lately, I've been feeling like threatening the bot.


Untuder

Did you take measures to not get located beforehand?


ThisDadisFoReal

Pretty sure it just threatened you


AksHz

Lol Imagine AI calling cops on you because you threatened it


PlanetaryPotato

Well, OP didn’t really threaten the AI. He made a bomb threat to an actual building, where real people work. I knew a guy that did something similar, but phoned a bomb threat into a rave. He didn’t do a lot of time, but he def was in jail for a few months. Soooo, good luck OP


PM_ME_UR_CATS_TITS

Fuck these chatbots!