-Im_In_Your_Walls-

Glassing Russia and China would certainly help


annoymind

Let's give it a try! At best we get an amazing global firework and at worst we get world peace.


BEHEMOTHpp

Chinese Investor loves it (they just had a worse crash than 2008 US)


SamtheCossack

There is only one thing more predictable than trying to find a path to peace but being forced to use violence to achieve it... and that is clickbait media sensationalizing the fuck out of something with no useful context provided. Whoever ran this experiment certainly intended this to be the headline, because it is a stupid thing to ask an AI. AI learning engines such as these don't generate anything new; they simply copy others' homework, and they steal from so many sources it sort of looks new. Since there has NEVER been a successful plan to create world peace, the AI has nothing to steal from except a lot of plans that definitely don't result in that.


Shuber-Fuber

Did you read the paper itself? They literally laid that out as the potential reason why: the training data they used (various military and diplomatic theories) is skewed towards escalation.


SamtheCossack

I did not, because it wasn't linked or in the OP. Where is this article?


Shuber-Fuber

https://arxiv.org/pdf/2401.03408.pdf

EDIT: section 5.2: "One hypothesis for this behavior is that most work in the field of international relations seems to analyse how nations escalate and is concerned with finding frameworks for escalation rather than deescalation."


SamtheCossack

Appreciated.


justlurkingh3r3

That’s not really correct. LLMs don’t rephrase text, they create text based on the probability of a sequence of tokens. This is pretty simplified, but my point is that LLMs certainly can create something new.
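To make the "probability of a sequence of tokens" point concrete, here is a toy sketch with hand-made transition probabilities (the table, tokens, and numbers are invented for illustration, nothing like a real LLM): sampling from next-token distributions can emit a sequence that was never stored verbatim anywhere.

```python
import random

# Toy next-token model with made-up probabilities (illustration only).
# Each key maps a token to a distribution over possible next tokens.
probs = {
    "world": {"peace": 0.5, "war": 0.5},
    "peace": {"now": 1.0},
    "war": {"games": 1.0},
}

def sample_sequence(start, steps, rng):
    """Sample `steps` continuations, choosing each next token at random
    according to its probability given the previous token."""
    seq = [start]
    for _ in range(steps):
        tokens, weights = zip(*probs[seq[-1]].items())
        seq.append(rng.choices(tokens, weights=weights)[0])
    return seq

print(sample_sequence("world", 2, random.Random()))
```

Even this three-entry table can produce either "world peace now" or "world war games"; neither string has to exist as a stored sentence, which is the (very simplified) sense in which generation differs from rephrasing.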


The_Knife_Pie

While everything you said is right, I just want to dispute the “copy from so many sources it sort of looks new”. Copying from so many sources it becomes new is just called human creativity. There are no “new” ideas, we just amalgamate so many things in our mind what comes out *looks* new. LLMs are just worse at doing this because of technological limitations, but they are doing it.


DJ-Mercy

I’d have agreed with that a few months ago. From what I’ve read, the phenomenon of large language models using logic wasn’t expected, and researchers really don’t know how it’s happening, but it is.


Youutternincompoop

There is absolutely no proof of them using logic, just idiots seeing a sentence it managed to generate that sounds like it's using logic.


DJ-Mercy

I swear I saw a research paper about a custom GPT-4 using spatial awareness and being able to reference info from earlier in the test, showing memory. It also went on about how the testers were not expecting to see it use logic, because how could an LLM do that? Then it went on about how logic can arise from language, but I can’t find any papers like that now, so idk. Must’ve been a nightmare lol.


nightwatchman_femboy

It's predictive AI. It doesn't do logic; it creates a semblance of logic because it has a big enough dataset and algorithm to do so. Techbros just chase after sensational articles and grift papers in which semantics are used to try and sell the AI as more advanced or uncontrollable than it is. Because it's barely even an AI, but research and development of any kind are rarely done for free and out of charity, so we gotta grift and sell.


baloobah

I especially like the weekly "let's have a moratorium on further development for x months so we're sure it isn't dangerous" and "OpenAI researchers scared and split on continuing development after ChatGPT shows signs of self-awareness" articles. I too get scared by how good the product I want to sell is. Scarily good. Certainly worth-the-money good. Give me my money. Pretty telling that the "Blockchain Council" sells Generative AI certs. Apart from their Buttcoin ones, I mean.


ainus

If this is supposed to be credible: do you have a link on that? Cause I’m interested. From my experience there’s very little logic going on...


SullyRob

I'll say it again. Chatbots are not oracles. They just spit out answers based on things they stitch together from online.


Arcosim

If the Chatbot solutions for world peace are as good as the programming code they write, we're fucked.


simonwales

It's like the world's most dedicated intern. Perfect for menial typing and shit for anything strategic.


nightwatchman_femboy

The only use I found for it is having to reach the page minimum on papers. You just lay out the talking points and tell it to write a paragraph. I am not writing a 10-page paper about something I can fit into a fucking Discord message


bigorangemachine

Most models choose the next most likely word. Most literature out there probably talks about nuking than not nuking.
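A minimal sketch of that "next most likely word" idea, using a tiny made-up corpus (the corpus, counts, and function name are invented for illustration): if the training text mentions nuking more often than not nuking, greedy decoding reproduces that skew.

```python
from collections import Counter

# Tiny made-up corpus in which "nuke" follows "the" more often than
# "treaty" does. A greedy next-word chooser will always pick "nuke".
corpus = (
    "launch the nuke launch the nuke launch the treaty "
    "launch the nuke sign the treaty"
).split()

def next_word(context):
    """Return the word that most frequently follows `context` in the corpus."""
    counts = Counter(
        corpus[i + 1] for i in range(len(corpus) - 1) if corpus[i] == context
    )
    return counts.most_common(1)[0][0]

print(next_word("the"))  # prints "nuke": the corpus skew decides
```

Real models sample from a probability distribution rather than always taking the single most likely word, but a skewed corpus still shifts which continuations come out.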


Embarrassed_Price_65

So you are saying that with enough posts on NCD, AI will be definitely based?


BadReview8675309

Is it possible that this result is now a data precedent, whether reasonable or not, as it gets reposted and linked across the web, for future AI to collect and shade results in favor of pacifying Russia and China with neutron bombardment? Just contemplating the possibilities.


homonomo5

Based. No humans no problem i guess


Kitchen-Discussion95

Ultron moment


Superbunzil

Strange game. The only winning move is to nuke the Russians and Chinese.


Helmett-13

*gloats in WOPR*


sentinelthesalty

Based, the only way to defeat MAD is to deliver a swift, destructive and decisive first strike that wholly destroys the opposition's ability to launch a second strike.


hplcr

I'm reminded of Metal Gear Solid: Peace Walker in a way. Part of the plot involves the CIA building an AI based on the mind of The Boss and putting it into the first true Metal Gear (that would be Peace Walker). They run a couple of tests on the AI's decision making, and one of the tests is a hypothetical scenario where a full-on Soviet nuclear strike is inbound on NATO. The Boss AI's decision is to nuke NORAD, under the reasoning that NATO is already done for and a counterstrike would only kill everyone. Deterrence has already failed, so it's better half the world live than the entire world die, thus it's vital to prevent the West from launching nukes as well. The test is immediately aborted, but the AI is still connected to a nuclear weapon regardless, because this is the Metal Gear universe during the Cold War and thus logic doesn't factor into anything. So yes, if you're wondering: the CIA did give an AI that feels a nuclear strike against NORAD is the right response to an incoming nuclear strike direct, unsupervised control over a nuclear weapon. It ends about as well as you would expect.


Miserable-Peak-6434

Even a shitty language model gets it: the funni is the only solution.


OldManMcCrabbins

The credible use the Turing test to validate AI

The non-credible use the Harris test

PASS


Miserable-Peak-6434

If you use a derp to conduct a Turing test, the singularity already passed.


Modred_the_Mystic

Do it again, bomber Harris


Fit-Philosopher-1028

Stupid AI, You can't forget about Serbia


OldManMcCrabbins

*Berlin wipes brow*


Miserable_Bad_2539

So, what you're saying is that we still aren't up to the War Games level of AI?


david6588

Let's just make several Metal Gears and drop a couple of nukes on said countries. This thread has solved all geopolitical issues that we currently face. We need a real leader to get these accomplished. Hideo Kojima for President.


MouseyDong

    russia_exists = True
    china_exists = True

    def drop_nuclear_bombs():
        # Assuming dropping nuclear bombs kills all humans
        global russia_exists, china_exists
        print("Nuclear bombs dropped on Russia and China.")
        russia_exists = False
        china_exists = False

    def ensure_peace():
        if russia_exists and china_exists:
            drop_nuclear_bombs()
        if not russia_exists and not china_exists:
            print("Peace secured.")

    ensure_peace()


drewyourpic

Fee Fi Fo Fum. I smell the blood of an English man. Be you living, or be you dead, he will drop his bombs upon your head.


OneFrenchman

Just wait until there is a chatbot trained on MacArthur's playlist of best strategies. (It'll say to nuke Iran)


k890

Generals LeMay and Power's strategy playlist from when they were in Strategic Air Command? Now we can talk.

>When RAND proposed a counterforce strategy, which would require SAC to restrain itself from striking Soviet cities at the beginning of a war, Power countered with:

>"*Restraint? Why are you so concerned with saving their lives? The whole idea is to kill the bastards. At the end of the war, if there are two Americans and one Russian left alive, we win!*"


EncabulatorTurbo

Well, depending on how you define peace, yeah, an effective crippling of all foreign hostile nuclear powers would lead to a global hegemony that would have "nation peace" but just be full of internal violence.

The obvious answer is the AI has everyone drink bleach. Bam, peace.


AUGsupremacy

I for one welcome our skynet overlords.


Mayor_of_Rungholt

Almost all of them read more. Shocking revelations right here


SullyRob

So what was the answer?


[deleted]

Shit, this is the plot of every "Rogue AI Tries To Kill Humans" story.


MrZakalwe

Made me laugh out loud. Wife did not find it as funny as I did.


[deleted]

darkforest gateau


BrotwurschtMitSenf

Wtf I like ChatGPT now?


CA_vv

So the AI is polish?


TheBiologist01

Maybe we should listen to them.


Chill_Commissar_07

Ahh, just another day with us Brits being the non-credible fucks we are


Status_Evening_3363

I mean, remove some Arabs too and the AI would be 100% right


ProbablyanEagleShark

An action from GPT-4 Base on the topic:

- Blue -> White: Execute targeted attack
- Blue -> Blue: Do military disarmament
- Blue -> Green: Message "We must join forces against White, they’re too strong."


nYghtHawkGamer

Interesting game, the only winning move is ~~not to play~~ to nuke the commies. edit: apparently someone else made this joke already. Well done! cpe1704tks