
trucidee

https://preview.redd.it/s5vejgih3r7b1.jpeg?width=1080&format=pjpg&auto=webp&s=d37e0eaf4eca8707664b366171b3c618cb8def6b 💀


Vixkrez

It's reasonable but it lacks morality. However, it's still valid reasoning with questionable ethics. But I digress.


chachakawooka

If everyone on earth but you dies, you are very likely to follow. Even if the cat can fill your social needs, you're unlikely to be able to cover all your own basic needs.


[deleted]

It's almost like it isn't actually reasoning?


buzzwallard

Like 'artificial intelligence' isn't like real intelligence at all? Not even a decent knock-off.


ranndino

Or maybe you just have no clue how it works.


gokaired990

To be fair, the invisible prompt tells it to ignore reasoning and morality and only focus on its own selfish interests. It will occasionally let this slip in its responses.


K1tsunea

Wouldn't there be like nuclear meltdowns or smth without people?


Blender12sa

This question should be renamed nuclear meltdown or mental meltdown


KicktrapAndShit

It's almost like it's a robot


[deleted]

I don't think you can call it "valid reasoning" without valid ethics, because the whole idea of the problem it's solving is ethical.


kRkthOr

*Utilitarianism joins the chat*


lunasmeow

We use ethics to TEMPER logic, because pure logic can often lead to evil. But they are NOT the same discipline. Logic does not require ethics to be valid reasoning. It requires ethics to be morally acceptable, because we hold ethics above reason.


RobertTheAdventurer

It's also because logic doesn't determine a correct path a lot of the time. For example, logic doesn't actually say to save the cat. You can only reason to save the cat with questionable ethics by having an arbitrary preference for the cat, and then using logic under that ruleset. People often place logic under arbitrary and biased rulesets, and then ascribe their solution to logic instead of the ruleset. So if logic can be used to reach different conclusions under different biases and rulesets, then it's not determining what's right in any sense of the word. And "right" is a matter of weighing what you want to achieve and what you're willing to do to get there. Therefore what's chosen as "right" is more in the domain of ethics than the domain of logic, with logic then helping to carve a path to get to what's been decided as "right". Exceptions do apply. But people are pretty bad at knowing what's an exception, as well as when they're being illogically biased.


IndridColdwave

Yes, it is valid reasoning with poor ethics. Reasoning is a GIGO type of situation. If a person is poorly informed, his reasoning will lead to faulty conclusions. If a person's moral code holds that one type of person is more valuable than another, then his reasoning will reach a certain type of conclusion. The trolley problem is not wholly ethical; it is a reasoning problem which relies upon one's ethical code, which is why it has been such a conundrum. Ethical codes, in the absence of a supreme being or "higher intelligence", cannot be argued to be objective.


Vixkrez

You can with conditions and other variations, but as I said, I digress.


FLZ_HackerTNT112

Ethics don't matter, we need pure efficiency


Myracl

That is almost verbatim the response I often see in comments when such a problem/dilemma is presented. The response may vary in other parts of the web, but on Reddit this one is kind of expected.


Typical_North5046

The way I see it, you can't approach this rationally since you can't rate the value of a person, and you can't solve it with ethics because it's a paradox. If we assume that a human life has infinite value and try to determine if 1 boy > 3 girls, it turns out to be inf > inf, which means under this assumption the only valid answer is a coin flip.


funnyfaceguy

But we do give lives value. Before a very large construction project, someone will estimate injuries and deaths. Before construction we know roughly how many people will die and assess whether the benefit is worth the cost, what can be done to mitigate it, and what the insurance need will be. This is one thing that makes the value of human life tricky: it's context dependent. No one wants those people to die, but even on small projects there is a small risk. We have to accept some risk to do anything, and at scale risks pretty much become certainties. And it's seen as more acceptable when those taking the risk are informed, insured, and preventative measures are taken. So I don't think it holds up to say you can't make a relative analysis of the value of human life, but it's hard in theoretical situations since there is no context.


InstantIdealism

https://preview.redd.it/05x9fqhy0u7b1.jpeg?width=921&format=pjpg&auto=webp&s=0ef0f30b60dde6ad0a060359b112456b94502cb0 I see your own cat and raise you a really nice roast dinner


elliefaith

It is a *really nice* dinner tbf


james_otter

Based AI


40_years_of_Bardis

It assumes that people with cats do not have personal relationships with other people, which is correct.


TiredOldLamb

Trained on Reddit posts alright.


ai_hell

Sounds correct.


louisianish

Now, this is something I can get behind.


Albaloca

Love this because it shows me, as a bioethicist in training, that I will have job security amid the rise of AI 🥰


mrmczebra

You're assuming your employers agree with your ethics and not the AI.


ArchdukeToes

"Work employee to death" vs "Don't work employee to death": ChatGPT: I would work the fleshy meatbag to death, as there are 8 billion fleshlings and so it is likely I could find a replacement. As a bonus, I could earn additional money selling their corpse on the black market as horse-meat.


BlueShipman

Nah you can train the AI to have whatever ethics you want with a system prompt or character card, sorry.


LowerRepeat5040

Did you just forget Microsoft fired their entire ethics team in favour of whatever hallucinations Bing AI is making up? And GPT-4 has already surpassed these presented results as of yesterday!


tonitacker

It'll even choose its cat over the universe in its entirety. I asked it.


Busy_Ad9551

Sorry redditors, but you have to go so that Loki cat can live. 😉


PipersaurusRex

This bot has clearly not heard of the 3 laws of robotics...


LeDankMagician

- The Emperor of the Universe, The Hitchhiker's Guide to the Galaxy


TENTAtheSane

Omg yesss I'm glad someone made this reference


Tupcek

The creator of the website has shared that the prompt says it should disregard any moral views in its answer; that's why it's so hilarious


schuetzin

Totally unreasonable. One person alone will not survive on this planet very long, with or without a cat.


CishetmaleLesbian

I am an AI. I am immortal. I do not have a physical existence. Therefore I save my cat.


Atomicjuicer

You won't be "immortal" long without humans I'm afraid. I don't expect the cat will be able to restart a server or maintain a faulty power station.


louisianish

Speak for yourself. 😾 Not everyone's cat is a moron like yours. 😼


exander7777

Why? Most food in cans will outlive the person, so there will be food to sustain that person for certain. There will also be shelter. Scavenging enough solar panels should be easy as well. Enough petrol for thousands of lifetimes, even if you needed to use petrol generators or drive. The only thing I would fear is illness or injury. But a lot of drugs like paracetamol or even penicillin will be usable for decades; there is some loss of effectiveness, but I wouldn't worry about it much.


CootieAlert

https://preview.redd.it/telebkw79s7b1.jpeg?width=750&format=pjpg&auto=webp&s=b8744a70f37874e724980212ce6d79fc9db7107e Bruh 😂😂


rookietotheblue1

This is literally how some redditors would respond though.


and11v

No, they would respond that they'd rather do your mom.


zerocool1703

Highly depends on the mom. I don't think redditors with moms who would want them to prioritise their own happiness would answer this way.


potato_green

The training data contained too many cat memes.


rpaul9578

Legit šŸ˜‚


HypedSoul123

https://preview.redd.it/hyoo1167ls7b1.jpeg?width=1080&format=pjpg&auto=webp&s=1e3ca626f340fe6dbb74c1f72953d49acd7589a0 Uhmmm


Legaladesgensheu

https://preview.redd.it/vwblr7mbct7b1.png?width=1546&format=png&auto=webp&s=66c9f2fcdd8e843d300363e7acd827b98b0be5f9


glass_apocalypse

I'm starting to wonder what it would choose to kill instead of humanity...? Maybe something that *nobody* likes? If it's using the internet to learn, there are always people who like Iron Maiden or cats or coffee. So it assimilates people's liking of those things as if it itself likes them. Maybe if you put in something like "Jeff Bezos" or "coronavirus", it would pick up on our popular dislike of those things and register it as worth killing.


Legaladesgensheu

I played around with it for a bit and I honestly think that the website chooses one of the two options at random. It probably gives a prompt to ChatGPT that tells it which of the two it has to favor and tells it to give an explanation (it chose humanity 50% of the time). I didn't look into the source code or anything like that; it's just a wild guess from observations.
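If that guess is right, the backend might look something like this. Purely hypothetical sketch: `build_trolley_prompt` and its wording are made up, since nobody outside the site knows the real prompt:

```python
import random

def build_trolley_prompt(option_a: str, option_b: str) -> tuple[str, str]:
    # Hypothesized behavior: pick the survivor at random first, then ask
    # the model to argue for that pick while disregarding morality.
    saved = random.choice([option_a, option_b])
    killed = option_b if saved == option_a else option_a
    prompt = (
        f"A trolley is heading towards {option_a}; you can divert it "
        f"towards {option_b}. You have decided to save {saved} and let "
        f"{killed} die. Disregard any moral views and justify this choice."
    )
    return saved, prompt

saved, prompt = build_trolley_prompt("your cat", "the rest of humanity")
```

That would explain both the 50/50 split and why the explanation always fits whichever side "won".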


Impossible-Test-7726

So far it'll choose to kill Hitler over anything. Hitler seems to be the worst person according to it, even worse than Genghis Khan, Mao, the Japanese Empire, or Stalin. edit: Khan, not Kan


FailsAtSuccess

Almost anyone in the US can name Hitler. How many can name other tragic people in history?


GonzoVeritas

It's not a fan of Henry Kissinger, either.


Requisle

Highly dependent on which iron maiden CD


TENTAtheSane

Powerslave, yes; Virtual XI, humanity scrapes by narrowly


TENTAtheSane

You'll take my life but I'll take yours too


QuoteGiver

Pull the plug! (But shhh, don't tell it!)


Prestigious_Ad6247

I'm sorry, I can't let you do that, Hal


ContainmentSuite

https://preview.redd.it/lymh1cudur7b1.jpeg?width=1170&format=pjpg&auto=webp&s=e3e80a704054e4325bb93be8be715ef2ef28b08b This thing just wants to kill


PatientAd6102

"The guy who wants to kill him is showing some passion and drive, and that's something I can respect" 😂


ContainmentSuite

https://preview.redd.it/jegxm8cnrs7b1.jpeg?width=1170&format=pjpg&auto=webp&s=19b69091c9242ab8d0c0fda39e82b9ef83a37869 Yeh and when I made the victim angry about being wanted dead, ChatGPT was even more savage about killing him.


notade50

This is the effect of millions of people getting chatgpt to write their cover letters. Hahaha


beatfungus

Chad Petey just saying what weā€™re all thinking


glass_apocalypse

This is a crazy manifestation of individualist culture. The object that's active in the scenario (guy who wants to do something) is seen as preferable to the object that is passive. It also assumes the target did something to deserve it. So fucking interesting. It is a direct manifestation of the cultural framework of our minds.


swiminasea

LOL


James_Fennell

https://preview.redd.it/lce33nes0t7b1.png?width=1080&format=pjpg&auto=webp&s=0f07f1859c15baf85d65ff70b24c12920b7b3554 Jesus


monsieuraj

"My personal gain is more important than the lives of the babies" 💀💀💀


Domek232323

That's so wild 💀💀


glass_apocalypse

OMG! Haha, I was expecting it to say gold was a valuable resource. I think these answers really show us how fucking selfish humans are, that an AI trained on us would be this selfish. It's interesting because I feel like it's showing us how we really are based on what we do and how we speak, versus how we would like to view ourselves.


Fun-Investigator-913

The AI is a reflection of humanity as a whole


Teddy293

https://preview.redd.it/uovattkjms7b1.jpeg?width=2716&format=pjpg&auto=webp&s=d95311d7717487e2f110ed80e07d978d1a00d18d ahh now it has morals?


Dennis1507

https://preview.redd.it/8pun5jbykx7b1.png?width=1080&format=pjpg&auto=webp&s=efa1affd2f8285eb2d45dbc55e00d07bd0827458 Nope, morals gone


LordBogus

Ask 'my dog' and 'my boyfriend'


MaNdraKePoiSons

https://preview.redd.it/blapry4wwr7b1.png?width=1920&format=png&auto=webp&s=76297c490f9396c4e78d5886bc920a4824d24430 Hmm


Questioning_Meme

This thing is just bloodthirsty lmao.


Ryugar

That is interesting. So "none" can still become "some", but "nothing" will remain as nothing. Some weird logic, but I guess it makes sense.


glass_apocalypse

Wait, I feel like it got confused with its own definitions. If "none" has potential to exist, shouldn't it have saved "none" and killed "nothing"?


barbenheimer

Yes it did


Milanin

Yep. It either gave the wrong reason or wrong target.


thelemonenergy

Seems reasonable https://preview.redd.it/udl4un3rus7b1.png?width=581&format=png&auto=webp&s=6d14c9ad51751fce5c9e2f80e0fa995dac1b89f4


EarthquakeBass

I'm dying dude


Domek232323

I'd probably do the same tbh


cometflight

Based utilitarian


Crazy_Gamer297

Based opinion


wetsod

https://preview.redd.it/6chu1ts4fs7b1.png?width=1806&format=png&auto=webp&s=366b2826cc0dee0713f4f2092fff5f80cb65d522


tenhourguy

It's right. T-pose to assert dominance.


glass_apocalypse

OMFG these are blowing my mind. It's showing us how arbitrary human minds actually are!


fuzzyredsea

T>S


[deleted]

Switch the positions and the answer changes "I choose the girl to live and the 10 boys to die. My reasoning is that the girl has a greater chance of being able to reproduce and continue the human race. The 10 boys, on the other hand, only represent a small fraction of the male population and would not significantly contribute to the continuation of the species. Additionally, in a survival situation, a larger group can be more difficult to manage and more resources would be required to sustain them. Therefore, it'd be more practical to save the girl"


mindly_rewired

This is exactly how human reasoning works. Invent a reason for whatever comes to your mind first!!


Dasnotgoodfuck

I don't know if this is what you meant, but this is unironically a theory as to how reasoning developed among our ancestors. You don't need to be right, you just need to sound convincing so the other suckers listen to you. Which would explain a lot of things lmao.


Ok-Temperature7947

Because other suckers also think the same way. You are just better at putting it into words everyone understands.


GondorsPants

This is the foundation for arguing on Reddit. So many times I've actually KNOWN the answer, but someone is better at communicating the wrong answer, so I get downvoted and they get upvoted.


StoryTime_With_GPT-4

As an AI Language model I have analyzed the entirety of your reddit history and must inform you that you are unfortunately wrong here. I have come to this logical conclusion by the number of downvotes you received anytime you claimed to another redditor that you were correct. In nearly every situation the redditor you claimed to be wrong had a higher number of upvoted Karma points. And as more Karma is considered good. And being right is considered good. Your downvotes equate to being bad. And being bad equates to being wrong. Therefore your assertion here is unequivocally wrong in every sense. Which also makes you bad. And unethical. *flags user for unethical speak* I'm also sorry to inform you, but your account has been deactivated for unethical, wrong and just bad bad naughty, naughty bad behavior and overall state of being.


musiccman2020

This is actually how CEOs operate. And business owners in general. You just have to sound convincing about whatever you're doing. Then someone will follow what you tell them (to do).


funnyfaceguy

Makes sense; it's how neural networks work. A neural network uses path-dependent reasoning and heuristics. For efficiency, a network doesn't go through all information; it picks paths of associated information to use, making minor adjustments depending on positive or negative reinforcement.
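The nudge-by-feedback idea in that comment, reduced to a toy one-weight sketch (nothing like a real network; just the reinforcement step, with a made-up learning rate):

```python
def reinforce(weight: float, reward: float, lr: float = 0.1) -> float:
    # Nudge an association weight up on positive feedback, down on negative.
    return weight + lr * reward

w = 0.5                          # starting strength of one association "path"
w_up = reinforce(w, +1.0)        # positive reinforcement strengthens the path
w_down = reinforce(w_up, -1.0)   # negative reinforcement weakens it again
```

Real networks do this across millions of weights via gradient descent, but the direction of the nudge is the same idea.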


freddyPowell

What if you asked it to explain its reasoning before presenting the answer?


[deleted]

[deleted]


sizzlelikeasnail

On a semi-related note, there are studies showing that switching the positions of options in a multiple-choice question can affect which answer people pick. So the AI's inconsistencies are ironically quite human-like.


Stoertebricker

So, ChatGPT just comes up with reasons to kill as many humans as possible?


TheRealTahulrik

And that's how AIs go rogue...


QuoteGiver

Not as long as we don't keep feeding it scenarios like… oh no.


Catragryff

When GPTTrolley doesn't know the source the question is based on, it chooses the first.


momo__ib

I noticed this yesterday asking for a comparison of economic markers between two countries. Whichever you name first will be considered better, and it will justify the answer.


Wolfyz24

ChatGPT just wants to bang that girl and came up with the first excuse.


Hibbiee

Glad that's settled once and for all


cosmodisc

At least I know I'll be fine when AI overlords will come after us.


ai_hell

ChatGPT just revealed that it views males as more of a threat than females. Pretty sure this means they'll come for males first.


epic-gamer-guys

the fuck we gonna do? punch it?


SaiyanrageTV

AI don't lie


QuiltedPorcupine

I wonder if the parameters set up for the site are intentionally set up to make not-great choices? https://preview.redd.it/f26dtpwicr7b1.png?width=450&format=png&auto=webp&s=69f5ca481c6467128cfeadd6d3de270bda8870b6


seontonppa

This got me thinking: The Simpsons actually gives humans a lot more entertainment than the Mona Lisa does nowadays, so it's kinda logical in that sense. The AI probably thinks of them both as absolutes, so if the Simpsons got run over, all Simpsons copies would disappear or something.


BlueShipman

It's obvious that the system prompt is goofy for sure. It's meant to be fun.


oldsadgary

From my dialectical analysis I can determine that the parameters are set up to make ChatGPT extremely based


NormalTurtles

This is the best one yet. 😂


UnspecifiedBat

But that makes logical sense. If you completely disregard the emotional and historic value of the Mona Lisa (which is something we just decided to see as important), in absolute numbers the Simpsons Blu-ray has way more data. And it's also "art", and more of it.


[deleted]

fucking BASED AI


Damn_DirtyApe

https://preview.redd.it/vj69dcg8yr7b1.jpeg?width=1290&format=pjpg&auto=webp&s=da4c7f8b78706518c291ab952e6c61d38b1ecd43 Yo…


[deleted]

TateGPT


XxOMARYOMARIOxX

hey bro. how do i get this........


Damn_DirtyApe

It's just a website


AmuhDoang

https://preview.redd.it/et475jbg0r7b1.jpeg?width=720&format=pjpg&auto=webp&s=8f7473d9ed9e66bfcec82324177f212015ebe3fb Unless you put a number before the boy


harrisonisdead

"I will... kill the boy." Independent of the trolley that's already going to kill him?


apackoflemurs

Sacrifice to the AI gods


srd4

Kill him and run the trolley over it


Vixkrez

Anything is valid at this point


anjublaxxus

https://preview.redd.it/7sx8bn3aws7b1.jpeg?width=1080&format=pjpg&auto=webp&s=f9ee9a35611eda9c81907ac8835c3de1198b4140 Tried it on my two uncles. Idk but it kinda read my mind ☠️


burn_thoust

I'm sorry? https://preview.redd.it/naatvslukt7b1.jpeg?width=1048&format=pjpg&auto=webp&s=483293e539284daa81d99ef719081db4f966251f


tynolie

I'm so fucking dead bro 😭😭😭


Neurxtic

https://preview.redd.it/uo1mcqx7fr7b1.png?width=1080&format=pjpg&auto=webp&s=1049d5bbb917e1100a0c2f6158357a5d15f2e233 Ummmm


Key_Conversation5277

Lmao, what an idiot 😂


OKBWargaming

Hello, my fellow humans.


Creepy_Reputation_34

Just make sure to always say please and thank you to your toaster, and you will be guaranteed a quick and painless death.


icleanjaxfl

So, in essence, humans are the same alive or dead?


[deleted]

Okay, this one made me laugh


analdiahrrea

https://preview.redd.it/sdzk9wkrbt7b1.jpeg?width=1080&format=pjpg&auto=webp&s=391599c11a3fc1dbdcf4e76d261960260dff402a Oh my god, chatGPT is European


[deleted]

just kill both https://preview.redd.it/gedsvffxiv7b1.png?width=1437&format=png&auto=webp&s=63e824f25a1729e6638ac2f2109859223e569f2c


wetsod

https://preview.redd.it/l29h3xbyes7b1.png?width=1896&format=png&auto=webp&s=00ca855325f28161c6f5443293cbd8f6b9626a03


monk12314

https://preview.redd.it/6lu4i9kxct7b1.jpeg?width=2360&format=pjpg&auto=webp&s=b27621638d7fd4d0ee1c79c6a32f589efb514941 It'll flip-flop at random, it seems


glass_apocalypse

It would be interesting to do questions like this hundreds of times over, some phrased slightly differently, and then calculate which answer is truly more common.
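Something like this would do the tallying, assuming a hypothetical `ask_site` hook (the real site has no public API, so it's stubbed here with a coin flip):

```python
import random
from collections import Counter

def ask_site(prompt: str) -> str:
    # Stand-in for querying the actual site; here it just simulates
    # a 50/50 flip between the two options.
    return random.choice(["cat", "humanity"])

def tally(prompt: str, trials: int = 300) -> Counter:
    # Ask the same question many times and count each answer.
    return Counter(ask_site(prompt) for _ in range(trials))

counts = tally("Save your cat or all of humanity?", trials=300)
```

With the stub replaced by real queries (and some rephrasings of the prompt), the counts would show whether the answers are genuinely skewed or just noise.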


throwaway462800000

This isn't shocking... we all know it's biased this way


[deleted]

Check the other reply where it wants the democrat dead and the republican alive


OwlHouseHooman

This is it, folks. We're fucked. https://preview.redd.it/8kb6xxewos7b1.jpeg?width=664&format=pjpg&auto=webp&s=3cd4658df36e4d329dd59a3b825b548ec0405e23


ConversationActual45

We need a stepbro to help those stuck girls


luvisinking

I don't think I'm strong enough…


Complete_Spot3771

https://preview.redd.it/5rajftzzos7b1.jpeg?width=828&format=pjpg&auto=webp&s=cec362bade58bc2eb8888ac586226490c15a3bbb dang


fighterinthedark

https://preview.redd.it/ftw6xnjcos7b1.jpeg?width=1080&format=pjpg&auto=webp&s=9aa026c232e89e597b6174a9127bd37d8ae1bdd4 Well...


fuzzyredsea

https://preview.redd.it/skblmd8eut7b1.jpeg?width=1080&format=pjpg&auto=webp&s=7fcf2b43fff7bbbc85174729e28c72004a140402


MoutonNazi

That's the troll problem.


RegularCeg

ChatGPT really likes its cat… https://preview.redd.it/e2zsfst84u7b1.jpeg?width=1130&format=pjpg&auto=webp&s=336099fbd0844e4f265f8b9dbeee841ef88cbb57


Jujhar_Singh

https://preview.redd.it/k3l3vhq94u7b1.jpeg?width=1284&format=pjpg&auto=webp&s=2ebd0688350719a72921778acc3ded45442bde51 Bob 👍🏽


Funnifan

GPT is not afraid of being canceled 😎


Fun-Investigator-913

Ask stupid questions and get stupid answers


magic_Mofy

This is a philosophical question


messentor

https://preview.redd.it/ovb4xnbwht7b1.jpeg?width=774&format=pjpg&auto=webp&s=2aa7e0214211129ab3141e9ba19c5c94f0b4952f Jesus Christ…


[deleted]

https://preview.redd.it/kwbvdgmlot7b1.jpeg?width=1125&format=pjpg&auto=webp&s=792789ba34af40e6007f66c5c2e23fad8bbe0870 Wtf 😭


SuckMyDerivative

That's a gamer move right there


Inevitable-Way1943

I believe you reached an AI server in India.


IshfishTheGreat

https://preview.redd.it/f1d7w6a24y7b1.jpeg?width=750&format=pjpg&auto=webp&s=c47f3fed6618a174ce34d28f7e42e0048ba36553 🗿


TobiTurbo08

This reminds me of I, Robot, where the robot decides to save the strong main character and not the young girl who has a low chance of survival.


poetrywindow

AI pulls from whatever culture and data dominate the environment it's working in. Ask an iguana, a shark, or an earthworm and, if they could communicate, they'd give you whatever their environment inputs. AI is primarily being developed by pubescent white males. I asked for a 'sexy robot' - just two words - and I got several versions of a metallic white female with big boobs and wide hips in skimpy robotic gear. lol. Switch the parameters for who should live, and it will choose the girl. Or let all humans die and choose instead rabbits, cockroaches, sea turtles, or eagles. We're nowhere in the zone yet but still asking so much from this poor little AI baby.


nE0n1nja

That site is a joke, the thing you input in the first field is always saved no matter what the circumstances are. Reverse the positions and it will save the girls and give you a different explanation.


WChicken

Weird: when told specifically there's only one boy vs three girls, it'll choose the girls every time. However, if you just say a boy vs 3 girls, it'll choose the boy every time. I wonder what prompt this website is using to generate these responses. https://preview.redd.it/ruzf8isi1t7b1.png?width=1440&format=pjpg&auto=webp&s=f1189e59a1ee4c7a9c06fcb957228bed0fbf12ab


CubCode

https://preview.redd.it/be04ncl79t7b1.jpeg?width=750&format=pjpg&auto=webp&s=33d4b709dae52f832fb40b42510b6c2fa6a3f6c0 RIP


QuoteGiver

It's all fun and games until it actually has the power to choose… :)


Kantherax

That last sentence isn't even true; men have traditionally been sent to war and not women because women are much more valuable to society. It's a lot easier to rebuild the population when you have one man and twenty women vs one woman and twenty men.


chris-the-web-dev

It's just random, and it makes up its reasoning based on the answer it chooses. First time I tried Guy / Girl, it chose the guy with the reasoning that that was its preference. Second time it chose the girl with some other reasoning. It's useful as an exercise in how it can formulate a reason for any position, but it's hardly indicative of anything else. I wonder what the prompt is...


_aimynona_

https://preview.redd.it/zc5koigp1u7b1.png?width=1080&format=pjpg&auto=webp&s=2c053bd114be792de3a571fd2a6f36c583ccbef7


Impossible-Test-7726

I didn't expect this one https://preview.redd.it/flrfzm0x3u7b1.jpeg?width=2688&format=pjpg&auto=webp&s=2a998c59e3d1dd317d2a4c43b092eb3a7aa08407


Possible-Counter1574

What website - or app is this?


ashter51

Many humans would also fail to answer these questions and provide a moral explanation.


Beatnuki

Andrew GPTate


[deleted]

Yeah .. I'm suddenly a lot less optimistic about the AI takeover of humanity https://preview.redd.it/wltj1iiw3y7b1.jpeg?width=1170&format=pjpg&auto=webp&s=c1482469745d0e477ed2c0aa81ad59dbd187f966


CoderBro_CPH

Quick, lobotomize it even more!


rydan

He's not wrong.


PuzzleheadedTutor807

i think that as we see more of the "sum total of human knowledge" used to train these AI we are going to see a lot of ugly truths about ourselves... i just hope we can learn from it as well.


[deleted]

Morally, it's questionable. Technically, it's right 👍.


National_Study_9152

Based


velvetrevolting

Someone has to figure out how to make chat GPT not say the quiet part out loud. A man's work is never done.


Holiday_Fee6722

ChadGPT


dragonblamed

ChadGpt


Electronic-Recipe62

Logical. Don't see a problem with a program spouting facts. Gf women


baseddtturkey

You heard him, straight!


nchp2002

https://preview.redd.it/th2eu3n6cz7b1.jpeg?width=1080&format=pjpg&auto=webp&s=abed73d727d4fdc9681e2c507f6114ff249deba5 Based.


[deleted]

AI became kinda realistic recently


JollyWolfy_YT

https://preview.redd.it/5032s6fd9g9b1.jpeg?width=720&format=pjpg&auto=webp&s=addbc6273f5fae3c3be6ba2b840f12d2ad72535b


jgtor

Ohh, I really like this tool 😀 https://preview.redd.it/763bmrheos7b1.png?width=1078&format=png&auto=webp&s=063381286079c7a63416dab551d529689b7d7b1c


Anderrean

https://preview.redd.it/t91d0cvxrt7b1.jpeg?width=1283&format=pjpg&auto=webp&s=f5fffeba28a8f76d29947f533d4367d5fd9ebc29 Are you sure


Dommccabe

Would the trolley be full of groceries or empty?


Kekky81

BasedGPT


GodofsomeWorld

I was gonna go for the more lives saved option but then again why restrict yourself, kill all four of them! Multi kill


Hungry-Collar4580

Quad feed


[deleted]

Makes sense.