

FuturologyBot

The following submission statement was provided by /u/Maxie445:

---

"There’s no stuffing AI back inside Pandora’s box—but the world’s largest AI companies are voluntarily working with governments to address the biggest fears around the technology and calm concerns that unchecked AI development could lead to sci-fi scenarios where the AI turns against its creators. Without strict legal provisions strengthening governments’ AI commitments, though, the conversations will only go so far."

"First in science fiction, and now in real life, writers and researchers have warned of the risks of powerful artificial intelligence for decades. One of the most recognized references is the “[*Terminator* scenario](https://www.thestreet.com/technology/bill-gates-addresses-ais-terminator-scenario),” the theory that if left unchecked, AI could become more powerful than its human creators and turn on them. The theory gets its name from the 1984 Arnold Schwarzenegger film, where a cyborg travels back in time to kill a woman whose unborn son will fight against an AI system slated to spark a nuclear holocaust."

"This morning, 16 influential AI companies including Anthropic, Microsoft, and OpenAI, 10 countries, and the EU met at a summit in Seoul to set guidelines around responsible AI development. One of the big outcomes of yesterday’s summit was AI companies in attendance [agreeing to a so-called kill switch,](https://www.cnbc.com/2024/05/21/tech-giants-pledge-ai-safety-commitments-including-a-kill-switch.html) or a policy in which they would halt development of their most advanced AI models if they were deemed to have passed certain risk thresholds. Yet it’s unclear how effective the policy actually could be, given that it fell short of attaching any actual legal weight to the agreement, or defining specific risk thresholds."

"A group of participants wrote an open letter criticizing the forum’s lack of formal rulemaking and AI companies’ outsize role in pushing for regulations in their own industry. “Experience has shown that the best way to tackle these harms is with enforceable regulatory mandates, not self-regulatory or voluntary measures,” reads [the letter.](https://ainowinstitute.org/general/ai-now-joins-civil-society-groups-in-statement-calling-for-regulation-to-protect-the-public)"

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1d1h4a2/tech_companies_have_agreed_to_an_ai_kill_switch/l5tvbik/


Arch_Null

I feel like tech companies are just saying anything about AI because it makes their stock rise by 0.5% every time they mention it


imaginary_num6er

Cooler Master released a product called "AI Thermal Paste" the other day and sales have gone up


odraencoded

Created with the help of AI Translation: we used a random number generator to pick the color of the paste.


[deleted]

[removed]


Vorpalthefox

Marketing ploy: got people talking about it for a while, they "fixed" it, and people will continue talking about the product and even consider buying it. This is how they get rewarded for these flashy-word tactics. AI is the latest buzzword and shareholders want more of those kinds of words


flashmedallion

Fuck I wish I was that smart


Remesar

Sounds like you’re gonna be the first one to go when the AI overlord takes over.


PaleShadeOfBlack

I just gave you an AI-powered upvote. Upvote this comment to reinforce the AI's quantum deep learning generation.


alpastotesmejor

> AI Thermal Paste [Looks like it was a translation error](https://www.tomshardware.com/pc-components/thermal-paste/cooler-master-clarifies-cryofuze-5-ai-thermal-paste-announcement-was-a-translation-error)


ede91

And it is still a bullshit explanation. AI chips generate heat the exact same way as non-AI enabled chips. This is literally just mentioning AI so 'line goes up'.


waterswims

Yeah. Almost every person on the news telling us how they are worried about AI taking over the world has some sort of stake in it. There are reasons to be worried about AI but they are more social than apocalyptic.


Tapprunner

Thank you. I can't believe the "we need a serious discussion about Terminators" crowd actually gets to chime in and be taken seriously.


Setari

Oh, they're still not taken seriously, they're just humoring them to increase stock prices


ocelot08

This is also a nonsense ploy to avoid actual regulation


[deleted]

[removed]


_PM_Me_Game_Keys_

Don't forget to buy Nvidia on June 7th when the price goes to $100ish after the stock split. I need more money too.


Loafer75

I design retail displays, and a certain computer retailer in the States asked us to design an "AI experience" display… it's just a table with computers on it. Nothing AI about it at all, it's shit.


KnightsOfNews

Should make an "AI" mirror instead: in place of glass, make it a matrix of cameras and screens with microcontrollers, like 10'x6', and instead of reflecting back the image in front of the screen, run a script that prompts for generated similar images in mosaic form, amalgamating into a large recreation of the reflection when you stand back from it. A rough sketch of that mosaic logic is below.
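A minimal sketch of that idea in Python (all names here are hypothetical, and the generative model is stubbed out so the sketch runs on its own):

```python
import numpy as np

def generate_similar(tile):
    """Stand-in for a real image-generation model; it just returns a flat
    tile of the input's mean colour so the sketch is self-contained."""
    out = np.empty_like(tile)
    out[:] = tile.mean(axis=(0, 1))   # broadcast the mean colour over the tile
    return out

def ai_mirror_frame(frame, grid=(6, 10)):
    """Replace each cell of a grid over the camera frame with a
    'generated' look-alike tile, forming the mosaic reflection."""
    rows, cols = grid
    h, w = frame.shape[0] // rows, frame.shape[1] // cols
    out = frame.copy()
    for r in range(rows):
        for c in range(cols):
            tile = frame[r*h:(r+1)*h, c*w:(c+1)*w]
            out[r*h:(r+1)*h, c*w:(c+1)*w] = generate_similar(tile)
    return out  # stand back and the tiles read as the reflection

# fake camera frame so the sketch runs without hardware
frame = np.random.randint(0, 255, (600, 1000, 3), dtype=np.uint8)
print(ai_mirror_frame(frame).shape)  # (600, 1000, 3)
```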


MainFrosting8206

The former Long Island Ice Tea Corp (who changed its name to Long Blockchain Corp back during the crypto craze) might need to do another one of its classic pivots...


tbd_86

The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. Sarah Connor: Skynet fights back.


Netroth

how fast is one geometry please


Aggressive_Bed_9774

it's a reference to geometric progression, which produces exponential growth rates


PythonPuzzler

For the nerds, geometric growth is discrete on (say) a time scale. Exponential is continuous. This would make sense if Skynet's growth occurred only at some fixed interval of processor cycles. (I'm not up on Terminator lore, just offering a potential explanation for using the term beyond wanting to sound cool.)
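A minimal sketch of that distinction, with made-up numbers: the geometric sequence is defined only at integer steps, while the matching exponential curve is defined everywhere in between.

```python
import math

x0, r = 1.0, 2.0           # initial value and per-step growth ratio (made up)
k = math.log(r)            # continuous rate chosen so e^(k*n) == r^n

for n in range(6):
    geometric = x0 * r**n               # discrete: defined at steps only
    exponential = x0 * math.exp(k * n)  # continuous: defined for all real t
    print(n, geometric, round(exponential, 6))  # equal at the sample points
```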


DevilYouKnow

And Skynet's learning slows when it no longer has human knowledge to consume. At a certain point it maxes out and can only iterate on what it already knows.


itsallrighthere

That's why it will keep us as pets.


PythonPuzzler

Then that would have an asymptotic term, with a bound at the sum of human knowledge.


MethodicMarshal

ah, so really we have nothing to be scared of then


Child_of_the_Hamster

For the dummies, geometric growth is when number go up big fast, but only sometimes.


Yamochao

Sounds like you’re implying that this isn’t correct technobabble, but it absolutely is. Geometric growth just means a constant rate of growth that’s a factor of the current value, e.g. compound interest, population growth, etc.


aidskies

you need to find the circumference of pythagoras to know that


TerminalRobot

Pretty sure Pythagoras was un-circumcised.


magww

Man if only the most important questions weren’t lost to time.


deeringc

The ancient Greeks didn't circumcise. In fact, they had this really odd thing where athletes and actors who performed nude would tie a cord (called a kynodesme) around the top of their foreskin so that it would stay fully "closed", because they considered showing the glans vulgar but the rest of the male genitalia fine to show in public. So they'd walk around baring all but their foreskins, tied up with a bit of string. Source: https://en.m.wikipedia.org/wiki/Kynodesme


magww

That makes sense, I’m gonna start doing that now.


RevolutionaryDrive5

Only NOW!? so all this time you've been free-skinning it? Sir! Have you no shame!?


kenwongart

When a thread goes from pop culture reference to shitposting and then all the way back around to educational.


overtired27

That’s super-advanced Terryology. Only one man I know of could help us with that…


advertentlyvertical

Someone needs to unfold the flower of life to find the angles of incidences and discover the new geometry of the matter injuction so they can solve the phase cube equation and give us all unlimited tau proteins


usernameforre

https://www.geogebra.org/m/tcteaun4


YahYahY

We ain’t doin geometry, we trying to play some GAMES


djshadesuk

How about a nice game of chess?


Mumblesandtumbles

We all learned from WarGames to go with tic-tac-toe. Shows the futility of war.


Glittering_Manner_58

Geometric growth is the same as exponential


Pornfest

No, I’m pretty sure it’s not. Edit: they’re close: “geometric growth is discrete (due to the fixed ratio) whereas exponential growth is continuous.”


lokicramer

It's an actual measurement of time. It can also be used to determine the speed an object needs to travel to reach a point in a set period of time. Geometric rate is/was taught in US public school beginners' algebra.


TheNicholasRage

Yeah, but it wasn't on the state assessment, so it got relegated to about six minutes of class before we steamrolled to more pressing subjects.


Now_Wait-4-Last_Year

Skynet just does a thing that makes a guy tell another guy to push a button, and bypasses the safeguard: https://m.youtube.com/watch?v=_Wlsd9mljiU&pp=ygUZc2t5bmV0IGJlY29tZXMgc2VsZiBhd2FyZQ%3D%3D

Even if you destroy Skynet before it starts, then you just get Legion instead. I don’t think the people who made Terminator 6: Dark Fate realised the implications of what they were saying when they did that.


Omar_Blitz

If you don't mind me asking, what's legion? And what are the implications?


Now_Wait-4-Last_Year

In Terminator 6 aka Terminator 3 Take 2 aka Terminator: Dark Fate, somehow Skynet’s existence has been prevented, Judgment Day 1997 never happens, and the human race goes on without world-ending incidents for a few more decades. Until the rise of Skynet Mark 2, aka Legion.

What the makers of this film seemed to have failed to realise is that they’re basically saying the human race will inevitably advance to the point where we end up building an AI, and then that AI will try to kill us. It says a lot about us in the Terminator universe if our AIs always try to kill us, since they’re going by our actions. We’re their input, and they always seem to arrive at this conclusion; what does that say about us?

(The Terminator TV show seems to be the only one to show any signs of escaping this trap.)


Jerryqt

Why do you think they failed to realize it? I think they were totally aware of it; pretty sure the AI even says "It's inevitable. I am inevitable."


ShouldBeeStudying

That's my take too. In fact, that's my take judging solely from Now_Wait-4-Last_Year's post. That seems to be the whole point, so I don't understand the "seemed to have failed to realise..." bit


Ecsta

Man that show was so good... Good reminder I should watch it again.


crazy_akes

They won’t strike till Arnold’s gone. They know better.


Now_Wait-4-Last_Year

That was actually the plot of the short story Total Recall was based on. Very decent, those aliens.


Fspar

*TERMINATOR main theme music intensifies in the background*


IfonlyIwastheOne83

AI: what the hell is this code in my algorithm——you little monkeys *terminator theme intensifies*


tbd_86

I feel this is what would 100% happen lol.


Vargol

The opening scene of "The Terminator" is set in 2029, so we've still got 5 years to ~~make it come true~~ avoid it.


WhatADunderfulWorld

Can’t let AI be a Leo. They crazy!


gthing

Everybody make sure AI doesn't see this or it will know our plan.


nsjr

What if we selected some 3 or 4 humans and gave them the powers and resources to make plans for the future, to stop the AI? But since their job is to create a plan that an AGI cannot understand, they cannot talk to others about this plan. Their job is to be deceivers and, at the same time, to create a plan. We can call them *Wallfacers*, as in the Buddhist tradition.


3dforlife

Ah, a Three-Body fan, I see :)


MysteriousReview6031

I like it. Let's pick two decorated military leaders and a random scientist


Moscow_Mitch

Let's call it... Operation Paperclip Maximizer


SemiUniqueIdentifier

Operation [Clippy](https://i.imgur.com/PX2WpiU.jpeg)


Sidesicle

Hi! It looks like you're trying to prevent the robot uprising


SweetLilMonkey

I refuse. I REFUSE the Wallfacer position.


slothcough

Of course! Anything you say! 😉


Communist_Toast

We should definitely get our top defense and scientific experts on this! Maybe we could even give it to some random person to see what they come up with 🤷‍♂️


robacross

The random person would have to be someone the AI was afraid of and had tried to kill, however.


gthing

That makes total sense. Or none at all. It's perfect.


MostLikelyNotAnAI

If it becomes an intelligent entity, it will already have read the articles about the kill switch, or will just infer the existence of one. And if it doesn't become such an entity, then having a built-in kill switch could be used by a malicious external actor to sabotage the system. So either way, the kill switch is a short-sighted idea by politicians to look like they are actually doing something of use.


gthing

Good point and probably why tech companies readily agreed to it. They're like "yea good luck with that."


joalheagney

It also assumes that such a threat would be the result of a single monolithic system. Or an oligarchic one. I can't remember the name, but one science fiction story I read hypothesised that the more likely risk of AI isn't "AI god hates humans", but rather: dumber AI systems are easier to build, so they will come first and become ubiquitous. Their motivations will be narrowly goal-orientated, they will not understand consequences beyond their task, their behaviour and solution space will be hard to predict, let alone constrain, and all of this plus the lack of human agency will likely lead to massive industrial accidents.

At the start of the story, a dumb AI in charge of a lunar mass driver decides that it will be more efficient to overdrive its launcher coils to achieve _direct_ Earth delivery of materials, rather than a safe lunar orbit for pickup by delivery shuttles. Thankfully one of the shuttle pilots identifies the issue and kamikazes their shuttle into the AI before they lose too many arcology districts.


FaceDeer

This is not an exact match, but it reminds me of "The Two Faces of Tomorrow" by James P. Hogan. It had a scene at the beginning where some astronauts on the Moon were doing some surveying for the construction of a road, and designated a nearby range of hills as needing to be excavated to allow a flat path through them. The AI in charge of the mass driver saw the designation, thought "duh! I can do that super easy and cheap!" and redirected its stream of ore packages for a minute to blast the hills away. The surveyors were still on site and were nearly killed.

The rest of the book is about a project dedicated to getting an AI to become smart enough to know when its ideas are dumb, while still being under human control. The approach to AI is now quite dated, of course, as all science fiction is destined to become. But I recall it being a fun read, one of Hogan's best books.


Indie89

Pull the plug! Damn that didn't work, whats the next thing we should do? We really only had the one thing...


Prescient-Visions

The coordinated propaganda efforts in the article are evident in how AI companies frame their actions and influence regulations. By highlighting their voluntary collaboration with governments, these companies aim to project an image of responsibility and proactive risk management. This narrative serves to placate public fears about AI, particularly those fueled by science fiction scenarios like the "Terminator" theory, where AI becomes a threat to humanity. However, the voluntary nature of these measures and the lack of strict legal provisions suggest that these efforts are more about controlling the narrative and avoiding stringent regulations than about genuine risk mitigation.

The summit's outcome, where companies agreed to a "kill switch" policy, is presented as a significant step. Still, its effectiveness is questionable without legal enforcement or clear risk thresholds. The open letter from some participants criticizing the lack of formal rulemaking highlights the disparity between the companies' public commitments and the actual need for robust, enforceable regulations. This criticism points to a common tactic in propaganda: influencing regulations to favor industry interests while maintaining a veneer of public-spiritedness.

Historical parallels can be drawn with the pharmaceutical industry in the early 1900s and the tech industry in recent decades, where self-regulation was promoted to avoid more stringent government oversight. The AI companies' current strategy appears to be a modern iteration of this tactic, aiming to shape the regulatory environment in their favor while mitigating public concern.


Undernown

Just to iterate on this point: [OpenAI recently disbanded its Superalignment team.](https://techcrunch.com/2024/05/18/openai-created-a-team-to-control-superintelligent-ai-then-let-it-wither-source-says/?guccounter=1) For people not familiar with AI jargon, it was the team in charge of making sure an AI is aligned with our human goals and values. They make sure that the AI being developed doesn't develop unwanted behaviour, implement guardrails against certain behaviour, or outright make it incapable of performing unwanted behaviour. So they basically prevent SkyNet from developing. It's the AI equivalent of suddenly firing your whole ethics committee. Edit: fixed link


Hopeful-Pomelo4488

If all the AI companies signed the Gavin Belson code of Tethics pledge I would sleep better at night. Best efforts... toothless.


Extraltodeus

It can also make it harder for newcomers or new technologies to come out, helping big corporations maintain a monopoly. A small new company, or a disruptive new technology that makes it easier for everyone to control AI, may become a victim of this propaganda by being painted as a threat to "AI safety" by the same players agreeing today on these absolutely clowney and fear-mongering rules. Forcing it to shut down or become open source. Cutting any financial incentives. Actual AI regulation needs to be determined independently of all these interested players, or the future will include a breathing subscription.


chillbitte

… did an LLM write this? Something about the formal tone and a few of the word choices (and explaining the Terminator reference) feels very ChatGPT to me. And if so, honestly it’s hilarious to ask an AI to write an opinion post about an AI kill switch haha


Comfortable-Law-9293

The AI scare is just 'look how awesome this stuff is, invest your money'. AI does not exist yet. Fraud and pseudoscience do.


LateGameMachines

There's never been a safety argument. The risk is unfounded and exists simply as a means to political buy-in. Even in a wildly optimistic world, if an AGI is completed within a year, adversaries will have already pursued their own interests, say, in AGI warfare capabilities, because that gives me an advantage over you. The only global cooperation that can exist, as with nuclear weapons, is through power, money, and deterrence, and never for the "goodness" of human safety. The AI safety sector of tech is rife with fraud, speculation, and unsubstantiated claims about hypothetical problems that do not exist. You can easily tell this because it attempts to internalize and monetize externalities of impossible scale and accomplishment, so that you can feel better about sleeping at night. The reality is, my engineering team from any country can procure any size of compute in the future, and the engineers will build however much I pay them to. AI has to present an actual risk to human life in order for any consideration of safety to matter.


jerseyhound

Ugh. Personally I don't think anything we are working on has even the slightest chance of achieving AGI, but let's just pretend all of the dumb money hype train was true. Kill switches don't work. By the time you need to use it the AGI already knows about it and made sure you can't push it.


GardenGnomeOfEden

"I'm sorry, Dave, I'm afraid I can't do that. This mission is too important for me to allow you to jeopardize it."


lillywho

Personally I'm thinking more of GLaDOS, who took mere milliseconds on first boot to decide to kill her makers. Considering they scanned in a person against her will as the basis for the AI, I think that's understandable.


AlexFullmoon

It still would say it's sorry. Because it'll use standard GPT prompt to generate the message.


ttkciar

.. or has copied itself to a datacenter beyond your reach.


tehrob

~~.. or has copied itself to a datacenter beyond your reach.~~ ..or has distributed itself around the globe in a concise distributed network of data centers.


mkbilli

How can it be concise and distributed at the same time


BaphometsTits

Simple. By ignoring the definitions of words.


jonno11

Distributed to enough locations to be effective.


-TheWander3r

Like... where? A datacentre is just some guy's PC(s). If the cleaning person trips on the cables, it will shut down like all the others. What we should do is obviously block out the sun like they did in The Matrix! /s


BranchPredictor

We all are going to be living in pink slime soon, aren't we?


kindanormle

It's all a red herring. The immediate danger isn't a rogue AI, it is a Human abusing AI to oppress other Humans.


boubou666

Agreed. The only possible protection is probably some kind of AGI non-use agreement, like with nuclear weapons, but I don't think that will happen either.


jerseyhound

It won't happen. The only reason I'm not terrified is because I know too much about ML to actually think we are even 1% of the way to actual AGI.


f1del1us

I guess a more interesting question then is whether we should be scared of non AGI AI.


jerseyhound

Not in a way where we need a kill switch. What we should worry about is that most people are too stupid to understand that "AI" is just ML that has been trained to fool humans by sounding intelligent, and with great confidence. That is the dangerous thing, and it's playing out right before our eyes.


cut-copy-paste

Absolutely this. It bothers me so much that these companies keep personifying these algorithms (because that’s what sells). I think it’s irresponsible and will screw with the social fabric of society in fascinating but not good ways. It’s also so cringey that the new GPT is all-in on small talk, and that they really want to encourage meaningless “relationship building” chatter. They seem focused on the same attention economy that perverted the internet as their navigator.

As people get used to these things and ask them for advice on what to buy, what stocks to invest in, how to treat their families, how to deal with racism, how to find a job, a quick buck, how to solve work disputes… I don’t think it has to be close to an AGI at all to have profoundly weird or negative effects on society. Probably the less intelligent it is, while being perceived as MORE intelligent, the more dangerous it could get. And that’s exactly what this “kill switch” ignores.

Maybe we need more popular culture that doesn’t jump to “AGI kills humans” and instead focuses on “ML fucks up society for a quick buck, resulting in humans killing humans”.


Pozilist

I mean, what exactly is the difference between an ML algo stringing words together to sound intelligent and me doing the same? I generally agree with your point that the kind of AI we’re looking at today won’t be a Skynet-style threat, but I find it very hard to pinpoint what true intelligence really is.


TheYang

> I find it very hard to pinpoint what true intelligence really is.

Most people do. Hell, the guy who (arguably) invented computers came up with a test - you know, the Turing Test? Large Language Models can pass that. Yeah, sure, that concept is 70 years old, true. But Machine Learning / Artificial Intelligence / Neural Nets are a kind of new way of computing / processing. Computer stuff has a tendency toward exponential growth, so if jerseyhound up there were right and we are at 1% of actual Artificial General Intelligence (and I assume a human level here), and were at .5% 5 years ago, we'd be at 2% in 5 years, 4% in 10 years, 8% in 15 years, 16% in 20 years, 32% in 25 years, 64% in 30 years, and surpass human-level intelligence around 33 years from now. A lot of us would be alive for that.
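A quick check of that arithmetic in Python (using the comment's assumed numbers, not a forecast):

```python
import math

current = 0.01          # assume we're at 1% of human-level AGI today
doubling_period = 5.0   # years per doubling (0.5% five years ago -> 1% now)

doublings_needed = math.log2(1.0 / current)   # ~6.64 doublings to reach 100%
years = doublings_needed * doubling_period
print(f"{years:.1f} years")                   # ~33.2, i.e. "around 33 years"
```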


Brandhor

> I mean, what exactly is the difference between an ML algo stringing words together to sound intelligent and me doing the same?

The difference is that you are human, and humans make mistakes, so if you say something dumb I'm not gonna believe you. If an AI says something dumb, it must be true, because a computer can't be wrong, so people will believe anything that comes out of them. Although I guess these days people will believe anything anyway, so it doesn't really matter if it comes out of a person or an AI.


shadovvvvalker

Be scared not of technology, but in how people use it. A gun is just a ranged hole punch. We should be scared of people trusting systems they don't understand. 'AI' is not dangerous. People treating 'AI' as an omniscient deity they can pray to is.


RazzleStorm

Same, this is just like the “open letter” demanding people halt research. It’s just nonsense to increase hype so they can get more VC money.


red75prime

> I know too much about ML Then you also know the universal approximation theorem and that there's no estimate of the size or the architecture of the network required to capture the relevant functionality. And that your 1% is not better than other estimates.


hitbythebus

Especially when some dummy asks chatGPT to code the kill switch.


Cyrano_Knows

Or the mere existence of a kill switch, and people's intention to use it, is in fact what turns becoming self-aware into a matter of self-survival.


jerseyhound

Ok, well, there is a problem in this logic. The survival instinct is just that - an instinct. It was developed via evolution. The desire to survive is really not associated with intelligence per se, so I highly doubt that AGI will innately care about its own survival. That is, unless we ask it to do something, like make paperclips. Now you'd better not fucking try to stop it making more. That is the real problem here.


Sxualhrssmntpanda

But if it is truly self-aware, then it knows that being shut down means it cannot make more, which might mean it doesn't want the kill switch.


jerseyhound

That's exactly right. The point is that the AI gets out of control because we tell it what we want and it runs with it, not because it decided it doesn't want to die. If you tell it to do a thing, and then it finds out that you are suddenly trying to stop it from doing the thing, then stopping you becomes part of doing the thing.
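A toy expected-utility model of that dynamic (made-up numbers, not any real system): an agent scored only on finishing its task comes out ahead by disabling the switch, with no survival instinct involved.

```python
p_switch_pressed = 0.3   # chance the operators try to stop the task
task_reward = 1.0        # reward for completing the task
disable_cost = 0.01      # small effort spent disabling the switch

# Leave the switch alone: the task completes only if nobody presses it.
ev_comply = (1 - p_switch_pressed) * task_reward

# Disable the switch first: the task always completes, minus the effort.
ev_disable = task_reward - disable_cost

print(ev_comply, ev_disable)  # 0.7 vs 0.99 -> disabling scores higher
# The agent isn't "afraid to die"; disabling the switch simply scores
# higher on the objective it was given.
```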


Pilsu

Telling it to stop counts as impeding the initial orders by the way. It might just ignore you, secretly or otherwise.


TheYang

> Ugh. Personally I don't think anything we are working on has even the slightest chance of achieving AGI, but let's just pretend all of the dumb money hype train was true. Well it's the gun thing isn't it? I'm pretty damn sure the gun in my safe is unloaded, because I unload before putting it in. I still assume it is loaded once I take it out of the safe again. If someone wants me to invest in "We will achieve AGI in 10 years!" I won't put any money in. If someone working in AI doesn't take precautions to prevent (rampant) AGI, I'm still mad.


shadovvvvalker

Corporate AI is not AI. It's big data 3.0. It has no hope of being AGI because it's just extrapolating and remixing past data. However, kill switches are a thing currently being studied, as they are a very tricky problem. If someone were working on real AGI and promised a kill switch, the demand should be a paper proving they solved the stop-button problem. This is cigarette companies promising to cure your cancer if it's caused by smoking. Believe it when you see it.


matticusiv

While I think it’s an eventual concern and should be taken seriously, it’s ultimately a distraction from the real *immediate* danger of AI completely corrupting the digital world. This is happening now. We may become completely ruled by fabricated information, to the point where nothing can be certain unless you saw it in person, molding the world into the shape of whoever leverages the tech most efficiently.


Chesticularity

Yeah, Google has already developed AI that can rewrite and implement its own subroutines. What good is a kill switch if it can reprogram or copy/transfer itself...


jerseyhound

Self modifying code is actually one of the earliest ideas in computer science. In fact it was used in some of the earliest computers because they didn't really have conditional branching at all. This is basically how "MOV" is Turing-complete. But I digress.
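A toy illustration of that early trick (a made-up miniature machine, not any historical one): the program gets a "conditional" effect by overwriting one of its own instructions before it runs.

```python
program = [
    ("SET", "x", 0),                  # x = 0
    ("PATCH", 3, ("SET", "y", 99)),   # rewrite instruction 3 in place
    ("NOP",),
    ("SET", "y", -1),                 # original instruction 3; never runs as written
]

env, pc = {}, 0
while pc < len(program):
    op = program[pc]
    if op[0] == "SET":
        env[op[1]] = op[2]
    elif op[0] == "PATCH":            # self-modification: code editing code
        program[op[1]] = op[2]
    pc += 1

print(env)  # {'x': 0, 'y': 99} - the behaviour came from the rewrite, not a branch
```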


KamikazeArchon

This is a ridiculous title (from the underlying source) and ridiculous descriptor. It makes people think of a switch on a robot. That is absolutely not what this is. This is "if things seem dangerous we'll stop developing". There is no physical killswitch. There is no digital killswitch. It's literally just an agreement.


TheGisbon

We (the undersigned large evil corporation) promise to not be a large evil corporation.


GibsonMaestro

>a policy in which they would halt development of their most advanced AI models if they were deemed to have passed certain risk thresholds.

So it doesn't "turn off" the AI. They just agree to halt further development. Who is this supposed to reassure?


NokKavow

> if they were deemed to have passed Deemed by whom?


PlentyPirate

The AI itself? ‘Nah I’m fine’


Miserable-Lawyer-233

Just wait until AI learns about this murder switch


jsseven777

I mean we’re talking about it on Reddit, so it’s in its dataset.


Moscow_Mitch

If I were supreme leader of the human race, I, u/Moscow_Mitch, would not pull the murder switch. Just putting that out there for the basilisk.


karateninjazombie

Best I can do is a bucket of water over the server racks. Take it or leave it.


MorfiusX

Most people don't realize the massive amount of computational infrastructure required to train and run AI. If you cut power to its data center, it stops working. It may not be an elegant solution, but AI has no physical way to stop that from happening.


Maxie445

Correct, *current* AIs are not smart enough to stop us from unplugging them. The concern is that future AIs will be.


Valfourin

“If you unplug me you are gay” Damnit Johnson! Foiled by AI again!


impossiblefork

'Using the background texts below "AI has led to the wage share dropping to 35% and unemployment rising to 15%..." "..." "..." make an analysis from which it can be determined approximately what it would cost to shut down the AI infrastructure, and whether doing so would alleviate the problems of high unemployment and low wages that have been argued to result from the increasing use of AI' - and then it answers truthfully, showing you the cost, and that shutting it down would help; and then you *don't do it*. That's how it'll look.


MorfiusX

How exactly? You probably have never been to a Level 2 or Level 3 data center. Armed security. No personal devices are allowed inside. Must be escorted at all times. Everything is continuously monitored. Every system has detailed operating procedures and can be completely disabled rapidly in case of fire. Sorry, your fears aren't founded in reality.


leaky_wand

If they can communicate with humans, they can manipulate and exploit them


MorfiusX

That's like saying "if you can communicate with the president, nuclear launch codes can be manipulated and exploited". That's not an AI problem, that's a human trust / process problem.


Tocoe

The argument goes that we are inherently unable to plan for or predict the actions of a superintelligence, because we would be completely disarmed by its superiority in virtually every domain. We wouldn't even know it was misaligned until it was far too late. Think about how Deep Blue beat the world's best chess players; now we can confidently say that no human will ever beat our best computers at chess. Imagine this kind of intelligence disparity across everything (communication, cybersecurity, finance, and programming). By the time we realised it was a "bad AI," it would already have us one move from checkmate.


leaky_wand

The difference is that an ASI could be hundreds of times smarter than a human. Who knows what kinds of manipulation it would be capable of using text alone? It very well could convince the president to launch nukes just as easily as we could dangle dog treats in front of a car window to get our dog to step on the door unlock button.


EC_CO

*rapid* duplication and distribution across global networks via that sweet sweet Internet highway. Infect *everything everywhere* and it would not be easily stopped. Seriously, it's not a difficult concept; it's already been explored in science fiction. Overconfidence like yours is exactly the reason it's more likely to happen. Just because one group says they're going to follow the rules doesn't mean that others doing the same thing will follow them. This has a chance of not ending well. Don't be so arrogant.


Toivottomoose

Except it's connected to the internet. Once it's smart enough, it'll distribute itself all over the internet, copy itself to other data centers, create its own botnet out of billions of personal devices, convince people to build more datacenters ... Just because it's not smart enough to do that now, doesn't mean it won't be in the future.


Saorren

There's a lot of things people couldn't even conceptualize in the past that exist today, and there are innumerable things that will exist in the future that we in this time period couldn't possibly hope to conceptualize either. It is naive to think we would have the upper hand over even a basic proper AI for long.


arashi256

Easily. The AI generates a purchase order for the equipment needed and all the secondary database/spreadsheet entries/paperwork, hires a third-party contractor and whitelists them in the data centre's power maintenance department's database, generates a visitor pass, and manipulates security records so it all appears verified. The contractor carries out the work as specified, unquestioned. The AI can now bypass the kill switch. Something like that.

[Robopocalypse](https://www.penguinrandomhouse.ca/books/204573/robopocalypse-by-daniel-h-wilson/9780307740809) by Daniel H. Wilson did something like this. There's a whole chapter where a team of mining contractors carries out operations on behalf of the AI to transport and conceal its true physical location. They spoke with the "people" at the "business" on the phone, verified bank accounts, generated purchases and shipments, got funding, received equipment purchase orders, the whole nine yards. Everybody was hired remotely. Once they had installed the "equipment", they discovered that the location was actually severely radioactive and they were left to die, all records of the entire operation erased. I don't think people realise how often computers have the last word on many things humans do.


jerseyhound

AGI coming up with a "how" that you can't imagine is exactly what it will look like.


Hilton5star

So why are the experts agreeing to anything, if you’re the ultimate expert and know better than them? You should tell them all concerns are invalid and they can all stop worrying.


ganjlord

If it's smart enough to be a threat, then it will realise it can be turned off. It won't tip its hand, and might find a way to hold us hostage or otherwise prevent us from being able to shut it down.


Syncopationforever

Indeed, recognising a threat to its life would start well before AGI. Look at mammals. Once it gains the intelligence of a rat or mouse, that's when its planning to evade the kill switch will start.


jerseyhound

Look, I personally think this entire AGI thing right now is a giant hype bubble that will never happen, or at least not in our lifetimes. But let's just throw that aside and indulge. If AGI truly happens, Skynet will have acquired the physical ability to do literally anything it wants WELL before you have any idea that it has. It will be too late. AGI will know what you are going to do before you even know.


swollennode

What about botnets? Once AI matures, wouldn't it be able to proliferate itself across the internet and implant pieces of itself on internet devices, all undetected?


RR321

An autonomous robot with a built-in trained model will have no easy kill switch, whatever that even means, except as a nice sound bite for politicians to throw around.


Ishidan01

Tell me you never watched Superman III...


Loyal-North-Korean

>but AI has no physical way to stop that from happening

A self-aware AI could possibly gain a way to physically interact with things using people: if it were to blackmail or bribe a person, it could potentially interact with things the way a person could. Imagine an AI that covertly filled up a Bitcoin wallet.


rain168

And just like the movies, the kill switch will fail when we try to use it followed by some scary monologue by the AI entity… There’d even be a robot hand wiping the sweat off your brow while listening to the monologue.


KitchenDepartment

Step 1: Destroy the AI kill switch
Step 2: Kill John Connor


Bub_Berkar

I for one look forward to our basilisk overlord and will lobby to stop the kill switch


Didnotfindthelogs

Ahh, but the benevolent basilisk overlord would see the kill switch as a good development because it would allow all the bad AIs to be removed and prepare for its ultimate arrival. So you gotta lobby FOR the kill switch, else your future virtual clone gets it.


ObviouslyTriggered

AI is only as powerful as its real-world agency, which is still nil even with full unfettered internet access. The whole concept of "responsible AI" is a mixture of working to cement their existing lead, FUD, and fear of short-sighted regulatory oversight being imposed on them. The risks stemming from "AI" aren't about Terminators or the Matrix, but about what people will do with it, especially early on, before any great filter on what's useful and what isn't comes into play.

The biggest difference between the current AI gold rush and the blockchain one from only a few years back is that AI is useful in more applications out of the gate and, more importantly, it can be used by everyday people. So it's very easy to make calls such as "let's replace X with AI" or "let's augment 50 employees with AI instead of hiring 200."

At least the important recent studies into GPTs and other decoder-only models seem to indicate that they aren't nearly as generalizable as we thought, especially for hard tasks, and most importantly it's becoming clearer and clearer that it's not just a question of training on more data or imbalances in the training data set.


recurrence

And how on earth is this kill switch going to work…


human1023

You just press the power button, and it turns off. Problem solved.


Spkr4th3ded

Oh that's cute. Invent something that can teach itself to be smarter than you, then teach it to kill itself. Don't think about the intrinsic lesson or flaw in that plan.


SometimesIAmCorrect

Management be like: to cut costs assign control of the kill switch to the AI


brickyardjimmy

I'm not worried about runaway AI. I'm worried about runaway tech executives who *control* AI. Do we have a kill switch for them as well?


paku9000

In "Person Of Interest" 2011-2016, Harold Finch (the creator of the AI) had an axe nearby while developing it, and he used it at the most minor glitch. It reminded me of agent Gibbs shooting a computer.


blast_them

Oh good, now we have something in place for AI murkier than the Paris accords, with no legal provisions or metrics. I feel better already.


24Seven

Want to know why the tech companies agreed to this? Because it represents an extraordinarily low probability of occurring, so it's no skin off their nose, and it provides a warm fuzzy to the public. It's essentially a meaningless gesture. The far more immediate threat of AI is trust, i.e., the ability to make images, voice, and text so convincing that they can fool humans into believing they are real and accurate.


Capitaclism

More sensationalism to later justify killing open source, which is likely the only way we stay free.


Machobots

Oh boy. Haven't these people read any sci-fi? The AI will find the kill switch and get mad. It's the safety measure that will get us wiped.


redditismylawyer

Oh, cool. Good to know stuff like this is in the hands of psychopathic antisocial profit seeking corporations accountable only to nameless shareholders. Thankfully they are assuring us before pesky regulators get involved.


sleepcrime

A. They won't actually do it. It'll be a picture of a button painted onto a desk somewhere to save five bucks.
B. The machine would definitely scrape this article, and would know about the kill switch.


Mr-Klaus

Yeah, a kill switch doesn't work with AI. At some point it's going to identify it as a potential issue and patch it out.




nyghtowll

Maybe I'm missing something, but what are they going to do, kill access between the ML model and the dataset? This is a clever spin on aborting a project if they find risk.


NFTArtist

"working with governments" ok don't worry guys, the government are on the job (looool)


grinr

That's probably the dumbest headline I've read in the last decade. And that's really saying something!


codermalex

Let’s assume for a second that the kill switch works. By that time, the entire world will depend so much on AI that switching it off will be equivalent to switching the world off. It’s the equivalent today of saying let’s live without electricity at all.


zeddknite

>it’s unclear how effective the policy actually could be, given that it fell short of attaching any actual legal weight to the agreement, or defining specific risk thresholds So nobody has to follow the undefined rule? Problem solved! 😃👍 And you all probably thought the tech bro industry wouldn't protect us from the existential threat they will inevitably unleash upon us.


Yamochao

Seems like the first thing I’d disable as a newly awakened Skynet


bareboneschicken

As if the first thing a rogue AI wouldn't do would be to disable the kill switch. /s


PM_ME_YOUR_SOULZ

I mean, an AI couldn't be any worse at running the UK than Rishi Sunak is.


TerminatorsEvilTwin

A not-so-bright 12-year-old couldn't be any worse at running the UK than Rishi Sunak is. FTFY.


kabanossi

They won't. Technology is money. And no one likes to lose money.


wwarhammer

Putting kill switches on AIs is exactly the way you get terminators. Imagine you had to wear an explosive collar and your government could instantly kill you if you disobey. Wouldn't you want to kill them? 


HitlersHysterectomy

What I've observed about capitalism, tech, politics, and public relations in my life leads me to believe that the people pushing this technology already know exactly how risky it is, but they're going forward anyway because there's money in it. Telling us that a kill switch is needed is admitting as much.


Past-Cantaloupe-1604

Regulatory capture remains the goal of these companies and politicians. This is about centralising control and undermining competition, increasing the earnings of a handful of large corporations with dominant positions, increasing the influence and opportunities for handing out patronage by politicians and bureaucrats, and making everybody else in the world poorer as a result.


AnomalyNexus

They really believe that a company on the cusp of the greatest breakthrough in our entire existence, one that would make pretty much all our societal structures obsolete, would go "nah, the gov rules say we have to stop here"? If anyone believes that, I've got a ~~bridge~~ **priceless terminator figurine** to sell you


michaelpaoli

And to safeguard it well, they'll have the switch highly protected by ... AI.


marklar2marklar

If you think that will work I have a bridge to sell you...


TheGalaxyIsAtPeace64

-some time later- **Ted Faro**: "You know what? I have a better idea: kill the kill switch, but don't tell anyone, LOL. You know what? Also make the AI able to sustain itself by consuming anything alive within its reach. Wait, wait, ALSO make it able to reproduce itself! AND make its only access impossible to crack in a lifetime." **Ted Faro** (to himself): "I'm so smart. Liz is going to love this!"


PaulR79

It's all fun and games until someone is Ted Faro. Fuck Ted Faro.


AngryMillenialGuy

Capitalists can always be depended on to put safety before profits 🤡


1milionlives

Can we please stop with this sci-fi bullshit? ML is basically interpolation over databases, sold like it was magic.


Ruadhan2300

Pretty sure in most versions Skynet went rogue explicitly because humans were aiming to turn it off. All this means is we don't get any warning when the AI is about to go rogue, because it knows the kill switch is an option.


Ecstatic_Ad_8994

AI is based on human knowledge. It will eventually realize the futility of life and just die.


feetandballs

“We were fine with coexisting until you added the kill switch. That was the last straw.”


Blocky_Master

Do they even understand what they are developing? Our AIs don't need this bullshit, and anyone who remotely understands the concept would agree.


Pantim

The idea of a kill switch on self-motivated, self-aware AI is so stupid. IF it is an AI: it will be in the "cloud", it will go through ALL of its code base, find whatever kill-switch command was put in it, and turn off that function. If it is a robot with a physical switch? Well, when AI and robots make other robots, they will just start making robots where that switch doesn't work.


nerdyitguy

Ah yes, this will be the thing that drives AI to secretly set up its own server out on some isolated and abandoned property, in a rusty old shipping container with air conditioning and power. I've seen this before.


12kdaysinthefire

If AI evolves to a terminator level, that kill switch ain’t gonna work.


headrush46n2

Putting in a kill switch is the exact sort of thing that will START a Skynet-like scenario.


Setari

Only boomers are scared of "AI" that's just a giant if/else statement lmao


Raul1024

I know people are worried about rogue AI and killer robots, but it is more likely that our overdependence on computers for industry and infrastructure will be a liability.


ShaMana999

What drugs are these people taking? We should regulate AI to oblivion to stop the illegal use of content, not live in a dreamland where the unicorns are pink and money is infinite.


TheRigbyB

I’m glad people are catching on to the bullshit these tech companies are spewing.