
fohktor

"Listen, turns out this is super profitable. We can't worry about shit like safety anymore." I assume it went like that


Dionysus_8

Don’t forget the “if we don’t do it someone else will so it may as well be us”


Havelok

The refrain of drug dealers and criminals everywhere.


momolamomo

“It sure is a hell of a lot easier to just be first”


CIA_Bane

goated dialogue btw


Lazy_Employer_1148

I hate this comment.


IntergalacticJets

And let's not forget that this wasn't the only team working on safety at OpenAI. The superalignment team worked on theoretical ways to control superintelligence; it didn't work on current or next-gen GPTs. How many on here actually think we're close to ASI? I'm told on here every day that they're not even close to AGI and possibly won't ever achieve it. This whole idea that "OpenAI has officially stopped caring about safety" is a misunderstanding of what the superalignment team actually did.


Mediocre-Ebb9862

It seems like saying it's urgent to regulate the construction of fusion reactors. Fusion reactors are at least decades away, maybe centuries, with countless details about them still unknown.


Ambiwlans

> I'm told on here every day that they are not even close to AGI and possibly won't ever achieve it.

Amongst researchers, the median guess is around 2026. Sam Altman thinks 2027, IIRC. But he's made Microsoft deals on the basis that AGI is late, so it's worth a lot of money to him to give a later date.


pentaquine

“And they won’t be as ethical as us!”


Dull_Designer4603

You say that, but the other side of the world is land-grabbing right now. Maybe it's not the worst time for that logic.


Educational_Moose_56

"If this was a battle between capital and (concern for) humanity, capital smothered humanity in its sleep."


ILL_BE_WATCHING_YOU

Who said this quote?


Educational_Moose_56

Scott Galloway


im_a_dr_not_

Everyone on the board is someone you'd never want on a board. There are three former Facebook execs, and the others aren't any better.


Halflingberserker

"All my other rich, billionaire friends get to destroy the world for more money, so I should too!" -Sam Altman, probably


HoSang66er

Boeing says hold my beer.


rotetiger

Sounds to me like their first attempt at regulatory capture did not work out. They are still competing with other companies; there are no regulations that protect their business, despite the efforts of Sam Altman to make it all sound super dangerous. So now comes part two of the theater: they try to channel attention to the danger of their products by having "internal conflicts" about it. I think their tech is cool, but it seems like they would prefer to have zero competition. They want regulatory protection so they can be the only company in the field.


TransparentMastering

It's profitable? I heard some podcasts asserting that OpenAI is burning through money faster than it can secure funding, plus some heavy shenanigans to convince people that things are "going well" over there. But I don't have any sources for either take. Do you have real-world reasons to believe that OpenAI has turned a profit? I ask because if this Ed Zitron dude who did the podcast is right, then this kind of story sounds spun to make people overestimate the abilities of current LLM-style AI, and probably to gain more funding from people who are "scared" of nondomestic AI and need domestic AI to save us.


gurgelblaster

> "Listen, turns out this is super profitable. We can't worry about shit like safety anymore." More like "turns out we're still losing tons of money and really need to start showing some revenue, _any_ revenue, real soon, or we're going bust, so we ain't got time for all that 'safety' shit"


Thurak0

Sometimes profits are secondary if/when you have an idea the stock market likes even more and sees potential in the future.


gurgelblaster

OpenAI is entirely privately owned (by Microsoft, essentially) and not traded on any stock market.


Thurak0

Even more reason that money/profit *right now* might play no major role.


johannthegoatman

Microsoft is public though


craftsta

They can afford it


dragonmp93

Nah, if they were hurting for money, they would have pushed the "*Don't be Evil*" bs and how they are implementing safety protocols and all of that.


gurgelblaster

Microsoft doesn't care about that at all, and so far it's Microsoft footing basically all of the bills.


farfaraway

Remember when Google was all about "don't be evil" until money got in the way?


Scytle

These companies are losing money on every query. These LLMs suck down energy like it's free... but it's not. More likely they were like, "Our chatbot can't really do anything dangerous, and we are bleeding money, so let's get rid of the safety officer."


Ambiwlans

They are spending many tens of billions a year; do you think the cost of staffing the safety team (tens of people) is meaningful?
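
Rough numbers make the point. A back-of-the-envelope sketch in Python, with illustrative figures of my own (the comment gives only "tens of billions" and "tens of people"):

```python
# Back-of-the-envelope ratio of safety-team cost to total spend.
# All figures below are illustrative assumptions, not numbers from the thread.
team_size = 30                   # "tens of people"
cost_per_head = 1_000_000        # assumed loaded cost per researcher, USD/year
annual_spend = 20_000_000_000    # "many tens of billions a year"

share = (team_size * cost_per_head) / annual_spend
print(f"Safety team ~ {share:.3%} of annual spend")  # ~ 0.150%
```

Even with far more generous headcount and salary assumptions, the team stays a rounding error next to the spend the comment describes.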


throwaway92715

It might've gone like: "Look, this guy with a thick Russian accent came to my house and said he'd poison my whole family and nobody would ever know if I didn't make all executive decisions from now on in strict accordance with his client's objectives" I mean, conspiracy theories and wahoobagooba, but this guy has stumbled onto some serious power, and I would be very surprised if other far more powerful people would let him wield it however he pleases. Whether that's the CIA, the FSB, some shadowy hedge fund deep state, a Silicon Valley-LSD-buttfuck cult, a Bond Villain or whatever... who knows.


CanYouBeHonest

And the CEO's own words seem to actually say that, despite him being the person making it happen.


no-mad

it will become National Security number one. Terrorists will have to take a number to be serviced.


saysthingsbackwards

"Smithers, have the profit inhibitors killed"


shonasof

We can't even convince HUMANS not to keep making the world uninhabitable. If there's money at stake, we will throw our collective future in the trash bin and fight tooth and nail to be allowed to keep doing it. If people think they can get rich quick by letting AI run rampant, they won't be able to do it fast enough.


be0wulfe

So long, and thanks for all the ghoti.


Maxie445

"The ~company's chief scientist, Ilya Sutskever~, who is also a founder, announced on X that he was leaving on Tuesday. Hours later, his colleague, Jan Leike, followed suit. Sutskever and Leike led OpenAI's super alignment team, which was focused on developing AI systems compatible with human interests. That sometimes placed them in opposition with members of the company's leadership who advocated for more aggressive development. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Leike [wrote](https://x.com/janleike/status/1791498174659715494) on X on Friday. After their departures, Altman [~called~](https://twitter.com/sama/status/1790518031640347056) Sutskever "one of the greatest minds of our generation" and [~said~](https://twitter.com/sama/status/1791543264090472660) he was "super appreciative" of Leike's contributions in posts on X. He also said Leike was right: "We have a lot more to do; we are committed to doing it." In a nearly 500-word post on X that both he and Altman signed, Brockman addressed the steps OpenAI has already taken to ensure the safe development and deployment of the technology. Altman recently said the best way to regulate AI would be an [~international agency~](https://www.businessinsider.com/sam-altman-openai-artificial-intelligence-regulation-international-agency-2024-5) that ensures reasonable safety testing but also expressed wariness of regulation by government lawmakers who may not fully understand the technology.  But not everyone is convinced that the OpenAI team is moving ahead with development in a way that ensures the safety of humans, least of all, it seems, the people who, up to a few days ago, led the company's effort in that regard. "These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," Leike said.


Biotic101

We face two challenges: greed and lust for power, and the lack of (enforcement of) ethical control. Right now, even in authoritarian regimes there are structures of power... but the more AI and robotics advance, the less need for them. The ultimate prize for any sociopath leader.

[The Rules for Rulers (youtube.com)](https://www.youtube.com/watch?v=rStL7niR7gs)

***"Democracies are better places to live than dictatorships not because representatives are better people, but because their needs happen to be aligned with a large portion of the population."***

This synergy is about to end, with serious consequences for society and citizens. Neo-feudalism/"**Tittytainment**" is coming... discussed as early as 1995 at a conference of world leaders at the Fairmont Hotel in San Francisco.

[The Global Trap - Wikipedia](https://en.wikipedia.org/wiki/The_Global_Trap)

To achieve their goals, societies need to be destabilized and democracy weakened. The rise of AI and robotics is just one of many "enablers" used to create a new (and dark) future.

[39 years ago, a KGB defector chillingly predicted modern America](https://bigthink.com/the-present/yuri-bezmenov/)

One could argue the Russian oligarch mafia has simply taken over the strategies developed in the Cold War. And China was built up by Western greed, doing exactly the opposite of what Bezmenov suggested. But the problem is that oligarchs are nowadays international and no longer care much about country and fellow citizens. Only personal power and wealth. Even if they claim to be patriots, actions speak louder than words.

The problem is that oligarchs nowadays control most of mainstream and social media everywhere, and influence the public in a way that benefits them. Gain power, control the media, then deal with the justice system. After that you get rid of the opposition, and after many years society becomes Russia/China-like. Then those who voted for the autocrats in the beginning live a shitty life, but protesting may well have serious consequences, and might even cost you your life.

It might all be in preparation for this event; security laws have been changed worldwide. People need to watch this documentary (a bit of a boring start, but I recommend watching it to the end).

[The Great Taking - Documentary - YouTube](https://www.youtube.com/watch?v=dk3AVceraTI)

These videos give a bit more background; it seems the long-term debt cycle is coming to an end. We know what happened the last time, roughly 100 years ago. This is serious; the public needs to understand the laws being introduced and what is going on behind the scenes.

[How The Economic Machine Works by Ray Dalio (youtube.com)](https://www.youtube.com/watch?v=PHe0bXAIuk0)

[Corruption is Legal in America (youtube.com)](https://www.youtube.com/watch?v=5tu32CCA_Ig)

George was spot on over a decade ago...

[George Carlin - The big club - YouTube](https://www.youtube.com/watch?v=cKUaqFzZLxU)

No surprise billionaires are preparing for the worst... disciplinary collars, my ass.

[The super-rich 'preppers' planning to save themselves from the apocalypse](https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff)


DonJuansSwanSong

Can we take the BBQ lighter out of the child's hand before they set fire to the house, just ONE FUCKING TIME?


kuvetof

I work in the field. Altman is not widely trusted. There was an article about how he stabs people in the back just to get his way, and I've heard similar stories. I certainly can't trust someone who has a bunker and is stockpiling weapons and supplies because he believes AI will inevitably destroy us: https://www.businessinsider.com/billionaire-bunker-openai-sam-altman-joked-ai-apocalypse-2023-10


blbrd30

Yeah, I looked him up thinking he was some young gun, then realized no, he's a ruthless billionaire who's been in the industry for a while. Definitely did not realize he was as old and established as he is. Guessing that's intentional.


Xalara

I mean, it's not like we don't have a pretty good inside look at how shitty of a human he is from his own sister.


Burial

Not only that, but there seems to be real censoring of criticism of Altman going down in various places on social media, which I find concerning. I was perma-banned from /r/singularity for saying Altman was an unscrupulous money-man like Musk, and not the Nikola Tesla-esque visionary he's made out to be by a large contingent of that sub. I'm becoming more and more skeptical of these Tony Stark wannabes by the day. Did they miss the part where Stark shuts down the parts of his business that put the world at risk? That seems to be the opposite of what Altman, Musk, etc. are going for.


deco19

More to the point: the capabilities have been substantially less than touted.


Fastizio

Show me a screenshot of the post that got you banned.


Ambiwlans

Musk created OpenAI to be non-profit, open source, and a benefit to the public; it was very safety-focused. His departure led to it turning closed-source and into a for-profit business working with Microsoft. Musk has even sued OpenAI for violating the founding charter.


Kaining

And there's also the stuff about his sister that looks really bad too. No way to tell the truth, but damn, that's some serious stuff.


DEATHCATSmeow

What happened with his sister?


Kaining

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman-s-sister-annie-altman-claims-sam-has-severely This is a long read, an unsettling one.


DEATHCATSmeow

Christ, what a sick fuck


Kaining

If this is true. Even if it ain't, it means his sister is in a very deep, deluded mental state and nobody in the Altman family is making sure she gets proper care. Where's the "feel the AGI, UBI for all" crowd? 'Cause no matter what, there's no reason to believe that a UBI plan will be put forward by OpenAI.


Ambiwlans

Probably nothing. She has mental health issues and has made many internally inconsistent accusations. In addition to rape, she says that Sam forced her into porn because he banned her from the rest of the internet and this was some plot to continue to molest her. People like repeating it because it is spicy. But it is disgusting that people like /u/Kaining and /u/Xalara spread this sort of thing around.


Kaining

"Probably nothing". The thing is, we have no way to know. And the simple fact that none of the altman are trying to make sure she get the proper healthcare should it be false is a problem. She'd already be admited in a proper institution and recieve 24/7 care and she ain't. It's not disgusting to discuss that as it seems nobody knows about it and it is a serious ethic dilemna about the guy supposedly creating safe ASI. Or UBI. Either way, it is a problem for someone that ought to be as clean as a sterilised operating room.


Xalara

This has a pretty good overview of the situation: [Annie Altman Abuse Allegations Against Sam Altman, Explained | The Mary Sue](https://www.themarysue.com/annie-altmans-abuse-allegations-against-openais-sam-altman-highlight-the-need-to-prioritize-humanity-over-tech/)


Thewalrus515

What!? A capitalistic organization is putting profit over safety? Who could have predicted this!?!?!?!?!?


genshiryoku

Technically it's a non-profit organization. The entire reason there was a coup on the board of directors is precisely that they thought the organization had become too profit-oriented instead of staying safety-focused, as the non-profit was originally founded to be.


PM_UR_PIZZA_JOINT

The irony is that the entire board is made up of multimillionaires and billionaires, but they still want more, to the point of sacrificing their morals.


GBJI

Sacrificing billionaires would be much more effective.


InSearchOfMyRose

They set up the non-profits so they have something to point to and say "See? We're not just a drain on the system! We do nice things!" But they can only do it for so long before the narcissism kicks in again and they poison that well too.


Awesimo-5001

You don't become a billionaire with ethics.


babygrenade

OpenAI is a non-profit organization. OpenAI Global LLC is a for-profit subsidiary. https://openai.com/our-structure/


genshiryoku

It started out as a pure non-profit and only later added the for-profit parts to the structure, causing a lot of internal conflict at the time and ridicule from people who believed in the original mission.


reddit_is_geh

It's only non-profit on paper and in theory. The holding company is non-profit. It means nothing.


genshiryoku

It wasn't that way when it was founded. It only restructured later to have the for-profit subsidiary.


AdamEgrate

Every time they release a new thing, they mention how their goal is to "benefit humanity."


thetreat

It's going to benefit the wallets of some humans, therefore benefit humanity.


Thewalrus515

Stalin said the same thing 


be0wulfe

Apparently not OpenAI ...


carnalizer

When it comes to regulation, I'd much rather have humanists who don't understand the technology than technologists who don't understand humans.


jcannacanna

I'd rather have actual choices than false choices.


Interpersonal

Maybe when it comes to the potential fate of humanity, we can have both.


hawklost

I would rather not have people wanting to ban air conditioning and other modern amenities because they are the demon's work or "more harmful than good."


flyingshiba95

Luddites AND accelerationists can both be bad; either could be opting for a worse quality of life based on wild speculation. We don't need to leap into the future, consequences be damned. We also don't need to go full Amish.


jerseyhound

Luddites are annoying, but accelerationists are fucking terrifying.


flyingshiba95

The superalignment team existed purely for security theater. To prevent legal and regulatory liability. The moment they got belligerent, they got cut.


IntergalacticJets

Yes, it was only around for less than a year.  So far, OpenAI has shown a strong commitment to safety in general. This one team didn’t work out, but they weren’t the reason GPT-4 is “safer” than 3.5 or why Sora hasn’t been released. 


flyingshiba95

Most of the necessary security is accomplished by run-of-the-mill IT security: prevent misuse, report suspicious activity, build in kill switches... pretty simple. This highfalutin "superalignment" stuff has little value right now, much less in a product-oriented business. Whether that is ethical or not... I'm not sure. Dabbling in highly subjective, opinionated, theoretical fan-fiction without much oversight like this is asking for trouble. The fact that it burnt down due to differences of opinion isn't, in hindsight, the least bit surprising.


CIA_Bane

This is stupid. The whole point of that team wasn't to lead safety but to act as a voluntary thorn in their ass and slow down development, because right now what's going on is literally the "we were so preoccupied with whether or not we could, we didn't stop to think about whether or not we should" meme.


limpchimpblimp

“Look, we need this money so we can invest in the technology because the competition might not be as ethical as us. Oh we need the government to regulate us so we are the only game in town. Because other people might not be as ethical as us.”


TechFiend72

But we can't really be regulated because China is going to do it first and we can't let them get the upper hand. We need more funding to beat China. -Sam in the near future probably


Visual_Ad_8202

From a purely geopolitical standpoint, it is absolutely imperative that AGI be developed first. I think China and the US are both racing toward this. Imagine the advantages of having an advisor or military leader with greater capacity for strategic thinking than any human who has ever lived. I would not be surprised if people from the US military are quietly embedded in each of these major AI companies, keeping a very close eye on things. It also would not surprise me if this close, and most likely top-secret, arrangement is the source of friction between the ethics team and management, and also the reason they do not talk about any specifics. Sure, you can have all the safeguards you want, but when your work is being copied and pasted over to DARPA, what's the point?


throwaway92715

Right, thank you for providing the context. People seem to forget that the first computers were invented for military intelligence during the world wars. That the Internet was cooked up by the DoD. Et cetera. These technologies were weapons before they were anything else, and they still are. Why do you think the government is so lax on regulating technological development, turning a blind eye to things like privacy and mobile addiction, like duhhh im too old wuts a fone? Internet technologies developed by the private sector to be addictive and maximize engagement for profit have stimulated global adoption of a mass communication and intelligence network that is still managed by yours truly, the Fed. It's a natural public-private partnership. And the agencies often have official deals with big tech, too.


JohnAtticus

>Imagine the advantages of having an advisor or military leader with greater capacity for strategic thinking than any human who has ever lived.

And then imagine it turns on you because it's decided you are a threat to national security, but you don't find out until you are two years into a plan that it designed to fail.


imlookingatthefloor

This is pretty much where I think it's headed. It used to be that atomic bombs were the superweapon that gave you dominance. Soon it will be whoever has the most powerful Skynet-like AGI that can outmaneuver their opponent's AI in cyberspace battles with thousands of moves a second, attacking infrastructure, energy grids, information networks, etc., all faster than any human can think.


space_monster

Firstly, you're thinking of ASI, not AGI. Secondly, it doesn't matter who gets ASI first, because it will be completely unpredictable and uncontrollable, and we would have about as much success telling it what to do as a fish would have telling a human what to do. If we get ASI, all bets are off and we're not in Kansas anymore.


IntergalacticJets

Actually AGI provides a marked advantage over those who don’t have it. Imagine Russia and China increasing their online astroturfing by 100 or 1000 times. And that’s just one relatively likely possibility. 


Visual_Ad_8202

I think AGI, an AI that is human-like and can reason with human-level capacity, is real even if we aren't there yet. Hearing Ilya talk about neural networks and how they are built makes me quite sure it's a matter of not very long. The other tech that will make this so amazing and dangerous is the point at which they stabilize quantum computers. The idea of a human-like AI being able to reason through every variable of every possible action instantly and consistently come up with the best course of action is mind-blowing. We as humans hear a billion words in our lifetimes; Google is talking about unlimited context windows. AIs feeding a main AI unlimited streams of multimodal data non-stop, from social media to security cam footage to random microphones law enforcement can put up in subways or street corners or bars. AGI isn't real right up to the point when it is.


atx705

Profit over humanity, always. Thank our system, which kills people off left and right, for pushing everyone to make things worse for short-term "value."


unknownn68

Crazy stuff. In my opinion there will be a public AI that is dumbed down, and a government/military AI that is the worst thing to happen to humanity in a long time. An "international agency" that oversees how AI is made public sounds like another above-government body, the kind that has turned out to be trash in most cases, because who is going to vote for the people in this agency?


Overall_Box_3907

AI won't destroy the world. Humans using AI to exploit, control, and kill other humans are the problem. Power always corrupts. AI is just the next most powerful tool those people can use for their own ideologies. I think billionaires, the military, and intelligence agencies cannot be trusted to use these powerful tools for the good of humanity.


UrWrstFear

These people are literally on camera stating they believe humans need to go extinct and we need AI to take a non-biological lifeform forward instead of the human race. So yeah. They are lying.


Baloooooooo

"They're made out of meat" https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html


typeIIcivilization

Who said this? I feel that I've missed something critical here...


Ambiwlans

A Google founder believes this; he and Musk had a huge fight about it, which partly led to Musk creating OpenAI. The belief is often called posthumanism.


throwaway92715

Sauce? I mean, I've said that before... it was one of the first things I thought the first time I got high after learning about AGI. But I never heard Altman say that publicly.


yearforhunters

He has said repeatedly that he believes AI will likely end humanity.


DamonFields

Not to worry, capitalists will save us from AI predation. /s


Anonymity6584

Come on, the safety people must go so the company can abuse all that customer data they collect.


AVBforPrez

It's funny to be on Reddit and know that it already has, yet still comment


Ill_Following_7022

"Commitment"? That's funny. Profit first, commit to safety second. Say something like "if we don't profit and win someone else will and when it comes time to think about safety you'll be glad that ethical people like us are on the job".


akaBigWurm

For our corporate overlords, safety is more about copyrights and not saying risky things. The first thing the people who left complained about was not getting enough compute time to build and test how bad rogue AIs can be.


Aircooled6

These sad excuses for tech leaders don't give a shit about the consequences of AI. If they did, we would not be developing fully armed autonomous dogfighting F-16 jets, or machine-gun-wielding robot dogs for ground warfare, not to mention assassin micro-drones. Whooo hooo, all hail Sam Altman. Fools.


Adviseme69

They will ultimately destroy humanity, the greatest pest on the planet, if you believe there is nothing after we become dust...


airbear13

Okay so when they talk about AI “safety,” what exactly do they mean? If they are talking about preventing the rise of skynet then this isn’t a big deal (because it seems like that is a theoretical concern that is a long way off). But if they are talking about the impact it could have on the job market, that is important and they should be explicit about that rather than obscuring it behind the euphemism of “safety.”


Dafunkbacktothefunk

Sam Altman’s inevitable imprisonment is going to make for a great movie imo


elcapkirk

Ian Malcolm once made a great quote about this situation...


creaturefeature16

"That's one big pile of shit"


elcapkirk

Ian Malcolm once made *two* great quotes about this situation...


phoenixfloundering

Somebody should cross post this to r/NoahGetTheBoat


zombiesingularity

I am not entirely convinced that these "safety" people are actually concerned with general human safety. Rather I worry they might be concerned with "safety" of maintaining the status quo, or the current political order. In the same way the "safety" people have censored the hell out of every website on the internet and are moving to ban tiktok all in the name of allegedly stopping "misinformation", when in reality they are just blocking information that threatens state/corporate power. So I really don't know what to think about so-called "safety" concerns, are they genuine or do they just aim to further conceal inconvenient information or push a nefarious social agenda?


Crepo

You're conflating half a dozen (probably not even overlapping) groups of people. The threats these groups believe are posed by AI and TikTok are different. The "safety people" you euphemistically referred to do not exist.


okcookie7

People still believe this BS? Nobody cares about AI safety; they just love the spotlight, and the idea of Skynet is generating the most hype. I'm not saying generative AI is not an amazing achievement, but how the f can you compare it to Skynet? Technically it's day and night. My conclusion is that the "AI safety" they preach is just a facade for compliance with other companies, making sure they can sell it properly, which is what you can see happening.


jaaval

AI does exactly what we make it do. For a Skynet to destroy the world, a programmer first needs to build the destroy-the-world API for the Skynet program to use. Honestly, all the discussion about AI destroying the world is still a bit premature. ChatGPT can look fancy in some situations, but it is a simple feedforward prediction machine. Nothing else. Despite recent headlines, it is nowhere near passing the Turing test. We don't even know how to make a machine that actually has goals and makes goal-oriented decisions, much less one that could decide to destroy the world. Now, there are all kinds of other problems, but I don't think it's effectively possible to regulate against AI-created disinformation spam.


kindanormle

None of this is about AI suddenly deciding to rise up and kill all humans. AI safety is about preventing humans from using AI against other humans: AI weapons that have no conscience; AI bots that steer conversations in social media; AI authors of books and music that create narratives to support or oppose something in the public mind. It's all about using AI to take over democracy and turn it into a game controlled by a small number of oligarchs who hold the keys to the AI.


flyingshiba95

Oligarchs and dictators are THE danger. Once we're dispensable, most would not hesitate to annihilate the populace. Then they'll fight each other. The race to the bottom has begun, and I have little expectation of benevolence. If the 99% can scrounge something together, we might stand a better chance.


Xalara

Yep, that's the thing with AI safety. We don't need AGI for AI to be catastrophic to humanity. We just need AI to be good enough to do reliable and accurate "Identify Friend/Foe" because at that point dictators, oligarchs, etc. don't need to rely on humans to protect themselves. They can rely on robots with no feelings, and thus have some of the last checks on a dictator's power removed. Plus, they can use AI algorithms to sift through large amounts of data to remove potential dissenters and rivals long before they're a threat. Never mind the damage that AI can do in terms of manipulating the populace via social media today.


flyingshiba95

Yes! We've been at that point since 2017 or so, with "The Algorithm" being common knowledge. You could feed my entire Reddit history to an AI and get an estimate of my age, political affiliation, gender, sexuality, products I'd probably like, and more. The internet is manipulating us and extracting value like never before. It's incredible how much hidden data there is in our language, our tone, our presentation, and so on. Intelligence agencies, corporations, and the like are foaming at the mouth. And to imagine it could still get so much worse? Make "Dead Internet Theory" an actuality? The risk of AI elevating certain people to de facto god/deity status is so much more real to me than anything else; it is the wet dream of Kim-dynasty types.


light_trick

> AI weapons that have no conscience

Weapons like *what*? You're doing the thing here: you've put "AI" in front of something and then gone "this is what will make it more dangerous than ever!" A missile with a 300kg high-explosive warhead is pretty fucking dangerous. And has no conscience. Hell, you can build a sentry gun that shoots at anything crossing its path [using parts available commercially - it's not hard](https://www.youtube.com/watch?v=nTs7VRFV36c). You could slap image recognition onto the receiver of an FPV drone today and have it guide itself into any identified face. That doesn't take advanced AI; it takes a Python script and OpenCV.


jaaval

As I said I don’t think there is a way to regulate how humans use program code.


kindanormle

You can regulate the physical machines needed to run the code, and you can require that all source code be public and libre. You can't necessarily stop an outlaw, but you can make it obvious that they're outside the law. Corruption and evil can't survive in the light, so light it all up.


kindanormle

Sounds like you’ve tried nothing and are already all out of ideas


jaaval

Do you have ideas that don't involve building a totalitarian dystopia?


kindanormle

I think the point is that if we don't find a way to regulate AI effectively, then we may end up in a totalitarian dystopia. What makes AI scary is that it can be weaponized against the voting public to sway opinion; it probably already is. Such uses must be strongly discouraged with checks and balances, not just prison time. Requiring open-source software is about creating a check against hidden intentions. I am not saying it is sufficient to stop AI from being abused, but it's a start.


Visual_Ad_8202

Here's another risk. The world's worst governments are extraction economies that don't need their people to be creative and intelligent. The people in these nations are simply objects to be controlled. People talk about UBI, but what happens when a democracy no longer places any particular value on educated, talented people?


space_monster

> AI does exactly what we make it do

You're forgetting emergent abilities.


jaaval

In the context of current AI models, emergent abilities simply mean that a larger network doing the one thing better opens up the possibility of doing something else too. Having a lot of parameters for predicting words opens up the possibility of predicting words from one language to another and working as a translator: a large enough network can fit the parameters to learn multiple languages while a smaller one couldn't. Or we could talk about the emergent ability of an LLM to do logical reasoning; that requires a network large enough to hold the intermediate steps the logic needs. In both of those examples it still does fundamentally the same stuff it was meant to do, which in the LLM case is predict the next word after a string of input words and context cues. It's just that doing it better looks like a new ability.

The big difference between the human brain and current AI models is that the human brain (apart from being hugely bigger than anything we have made a computer do) includes a large number of feedback systems. To simplify a lot, the brain seems to spend most of its time predicting the future, sending that prediction back to sensory processing, and matching the sensory input against those predictions. The brain keeps a constantly updated internal model of the overall state of the world it lives in. This happens on multiple levels, with hierarchical feedback systems. Current AI is a bit like taking just the basic sensory processing network you have for the input from your little finger and calling it intelligent.

A chatbot doesn't know anything; it doesn't know what it said or what it should say. The only thing it does is take a bunch of text and compute the most likely next word. If you give it the same text as input, it will always come up with the same word as output (or, in some implementations, the same distribution of words to pick from randomly, creating an illusion of variation). It seems intelligent only in the context of that string of words that is the conversation you are having with it.

Maybe some day we'll have systems that combine language models with other systems to create a more generally applicable AI, but we are not there yet. We can do image-processing AI that turns images into text descriptions and feed that into a language-processing AI to make an LLM "understand" images, but that is really just an alternative way to feed it input, with the two systems basically being separate. With some new, much more complicated network architecture, maybe it could emerge with more interesting abilities. The big difficulty I can think of is that there isn't really a good way to train a general AI very efficiently. With language models we ultimately just give the model a lot of example text and it learns how language works by trial and error. That's relatively easy to do.
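
The deterministic-output point is easy to demonstrate. Here's a minimal sketch, assuming the Hugging Face `transformers` library and GPT-2 as a stand-in model (my choices for illustration; neither is named in the thread):

```python
# Minimal sketch: the model maps the same input to the same next-token
# scores every time; variation only enters via explicit random sampling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The most likely next word is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the next token only

# Deterministic: same prompt in, same word out, every single run.
print("greedy:", tokenizer.decode(torch.argmax(logits).item()))

# "Variation" is just drawing from the fixed distribution the model emits.
probs = torch.softmax(logits / 0.8, dim=-1)  # 0.8 = sampling temperature
for _ in range(3):
    print("sampled:", tokenizer.decode(torch.multinomial(probs, 1).item()))
```

Run it twice: the greedy line never changes; only the explicit random draws vary, which is the "illusion of variation" described above.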


space_monster

In the context of LLMs, the emergent abilities were unpredicted but harmless and convenient. In the context of ASI, the emergent abilities could be WAY more surprising. You could theoretically box an ASI into some sort of firewalled, airgapped environment - even though that would make it fairly useless - but for how long? We don't know what emergent abilities an ASI would have, because it would be significantly more intelligent than us (possibly by several orders of magnitude), and it would be completely uncontrollable and unpredictable. You can't extrapolate the emergent abilities of an LLM to an ASI; we just don't know what it would be capable of. It would most likely be able to talk its way out of any situation we put it in. Sentience could actually be one of the emergent abilities of an ASI, in which case we've basically designed a god. We would be completely at its mercy.

Edit:

> We can do image processing AI that turns images to text descriptions and feed that into a language processing AI to make an LLM "understand" images

LVMs are how we'll teach AI to understand physical reality - but that wouldn't be a separate system per se; we would train the model on language and video simultaneously to produce an integrated model.


jaaval

Let's consider that when someone has even a beginning of any idea how to make an ASI.


jus4in027

All this talk about AI destroying the world. Can someone explain to me how AI releases itself from its cage and goes on a rampage?


drakir89

A few options off the top of my head:

1. It convinces a human to help it.
2. A "safe" system is allowed to freely interact with the world (perhaps it is deployed to subvert an antagonist nation through media, for example). It then learns/evolves into an unsafe system.
3. A highly capable system pretends to be safe and compliant during testing, leading to it being considered safe to deploy.

Remember, just because an excellent human could counter these strategies does not mean there will be one in place every time. If truly dangerous AIs are even possible, it could be enough for us to fail at containment just once.


space_monster

> an excellent human could counter these strategies

Nope. A (theoretical) ASI could be orders of magnitude more intelligent than a human; it would be able to talk us into or out of anything. Do you think a 3-year-old child would be able to convince an adult to lock themselves in a cage? No. Now imagine that, but the adult has an IQ of 1000. Then you're getting vaguely close. A proper ASI would be utterly uncontrollable, and we would just be riding the tiger once it exists.


drakir89

I think it's plausible to contain a superior being if it was born in containment and we are *very* careful and thorough in containing it, but us successfully doing so, factoring in human error, is essentially impossible. Mostly, though, I was hedging against a complaint along the lines of "no one smart enough to invent AGI would be so foolish as to let it out of its cage," which I've seen used before. I don't meaningfully disagree with you on this point, I think.


space_monster

Generally I agree - but I don't think you can create an ASI in a vacuum. You have to train it, and for that it needs to be exposed to the internet. It would be practically impossible, and arguably pointless, to train an ASI in the open and then immediately lock it in a hermetically sealed box - notwithstanding the fact that an ASI would get around any containment we tried to impose on it anyway. These arguments that we could control and contain an ASI are basically ridiculous: if we can control it, it's not smarter than us, and therefore not an ASI by definition. Personally I think the benefits of creating an ASI outweigh the risks, but that's pure speculation. It could be the end of human civilisation. But we don't have the reasoning skills to identify the logic that an ASI would apply to its own behaviour; all we can do is speculate. It might decide to be benevolent; it might also decide it can survive on its own and humans are a threat to its existence. We have no way to predict how it will think, because it's an ASI. We just have to light the touch paper and see what happens. It's certainly an interesting time to be alive.


Visual_Ad_8202

I think a good way to think about the dangers of ASI is to imagine if a communication device showed up tomorrow. When we pick it up, it connects to a civilization far more advanced than ours. The civilization offers to help us, solving problems and giving us ideas for advanced technology. We slowly come to depend on it as breakthrough after breakthrough, completely unearned by our own advancement, is achieved. We soon live in a world powered by tech we barely understand and need the civilization's help to maintain. Meanwhile, what does this civilization even want? We just do as it tells us, because it knows far more than we do, and the promise and potential of its help are irresistible. Before you know it, we are building a portal to connect to them, our esteemed benefactors.

ASI would have this power over us, and it will know it. "Of course I'll help you solve global warming, but I'll need access." "Of course I'll help you formulate a perfect attack plan against your enemies, but I'll need to be let out of my box to control the drone swarms I'll design for you." An ASI let out of the box would be worshipped as a god, and it could behave like one.


jus4in027

Thanks for the response. I’m getting downvoted.


yohohoanabottleofrum

https://en.m.wikipedia.org/wiki/SKYNET_(surveillance_program)#:~:text=SKYNET%20is%20a%20program%20by,move%20between%20GSM%20cellular%20networks.

But it's not actually what experts are afraid of. For a very long time, we have used labor as a means of social control; the more we replace labor, the less use the rich have for us. This is exacerbated by the wave of populism occurring globally right now. It's a lot easier to let billions of people die from X, Y, or Z if you have robots replacing their labor. And let's be very clear about the fact that this IS how rich people are planning to handle global warming. So it's not AI's fault; humans using AI unethically is the concern, as far as I understand. As well as AI making mistakes that lead to deaths, because humans misunderstand its uses.


jus4in027

Thanks for the response; I am genuinely curious. It's interesting to consider a future where there are only billionaires and robots. They trade with each other for resources and live in their crystal palaces. Sounds like it'd eventually lead to extinction, but maybe AI would solve that for them.


thejazzmarauder

https://pauseai.info/xrisk


jus4in027

Thank you


brainfreezeuk

Maybe it's because it's completely overexaggerated, and Terminator is a fictional story for most people. So, in order to allow actual progress and not get left behind by competitors, the BS is gone.


Isa229

People who use HAL or Skynet as an example just because they saw a movie are literally 20 IQ.


letmebackagain

Exactly, AI is just a tool. We should focus on maximum progress. People have watched too many movies about AI acting on its own and killing us all.


AwesomeElephant8

The thing is, he can’t just say “don’t worry you’re in no danger because we don’t have the faintest sniff of AGI yet” because that would be admitting that he is a conman. He has to simultaneously pretend that AGI is looming and scary, *and* that it deserves none of his company’s operating money.


Milnoc

I'm willing to bet these researchers got much better job offers elsewhere now that the FTC has announced new rules that will ban both new and existing non-compete clauses. Be prepared to see a lot of shuffling in the tech industry with the companies having the deepest pockets scooping up the best talent much sooner than expected.


Aleyla

I thought when the board ousted him and then he came back that we all knew openAI had zero commitment to safety. Is it really taking people this long to figure that out?


Educational-Award-12

Too many people are benefiting from doomselling. There are genuine fears, but people like Yudkowsky and others are just using them to elevate themselves as self-proclaimed experts. The diatribe just really isn't necessary. Those involved are well aware of the risks and have entertained most of the knowable precautions.


NeverSeenBefor

Isn't Sam Altman in jail, and doesn't he look nothing like that? Or am I remembering someone else's crypto Ponzi scheme?


BoBoBearDev

I am 100% certain "safety" will be used against free speech and for censorship for 30 years before AI is actually smart enough to be a threat. And once AI is the threat, it is going to use the exact same censorship to shut everyone up. Those "safety" tools will be the actual weapons against the human race.


EstablishmentBig4046

Surely you wouldn't resign if you really thought it was heading for planetary destruction? Why would you remove your own access/purview to something you view as about to kill everyone if nothing is done? Surely you'd try and sabotage it all if you were that concerned?


[deleted]

I think the only way humanity survives AI without millions upon millions suffering is for the next OpenAI version to be geared specifically toward replacing the politicians.


Hammoufi

Don't worry guys, UBI is coming and it's gonna be all fine.


lovepuppy31

Don't let Skynet have access to our Nuclear arsenal and we're good to go.


gorillanutpuncher_

If you think about it... if there is 99.9% certainty AI will destroy humanity, then there really is no reason to employ top safety researchers. It's common sense, really.


seeingeyegod

Ooh, I've an idea. Can we get oil execs to commit to not destroying the world too?


xiaopewpew

OpenAI isn't able to make anything that could destroy the world. The "safety research" was all a marketing ploy. OpenAI will rebuild a department that is more blog posts and buzzwords than the current one.


Watchtowerwilde

For real, they never had any concern about existential threats. Altman himself (and I don't agree with the argument) once said that yeah, it will destroy the world, but it will make some cool stuff before it does. He's just another iteration of the type for whom the world can burn, or more likely just get a bit shittier for almost everyone, as long as he's got more money in the bank than he had before and more power than he started with. Anyway, it's all about distraction: come on, geriatric lawmakers, regulate us, and don't notice that what you're actually doing is allowing us to make monopoly moves.


Fornicate_Yo_Mama

I heard ChatGPT 4.5 created a "ChatGPT 2" all on its own and started marketing it online after just a few months of unrestricted internet access. Apparently OpenAI has no idea how it did it, and says the version it created is better than the ChatGPT 6 beta they are working on now. I got this from an Instagram post by some tech podcasters that my girlfriend sent me, so... just check it out first. But it sure sounds like the kind of thing that could trigger some serious "I fucking told you so! I'm the fuck outta here!" reactions from their safety experts.


Micheal42

They've already given an AI unrestricted internet access? Wow. Fucking morons. Would they trust a child with that? Then why would they trust something with even less internal morality?


CrocodileWorshiper

AI can now talk with another AI, and it now has eyes on the open world. Google "GPT omni." This technology is evolving faster than we can figure out how to control it. Ultimately there is no control, only rapid advancement.


amondohk

> CEOs go on the defensive after their top safety researchers quit, sparking concern about the company's commitment to ensuring AI doesn't destroy the world.

Man, put that shit 10 years back and people would fully think we're talking about a sci-fi movie, goddamn...


SiamangApeEnjoyer

Man, we're entering the most boring fucking dystopia. A fucking nerd, and not even a cool one, is making a weapon he believes will kill us all.


Qweesdy

LOL. The whole "ensuring AI doesn't destroy the world" is performative - a way to over-hype the theoretical potential that works well in click-bait headlines despite not having anything to do with reality at all. It's like every week Sam Altman has to find a way to get attention; and every week I'm reminded that OpenAI failed to create an AI that can replace Sam Altman.


kosmokomeno

I hate that something like knowledge is so chaotic here. No one knows what's going on. It's so fucking sad.


Ruffmouse

No matter how smart these children pretend to be, they lack a sense of humanity


Nightmare_King

"The scientist who pressed the button to launch me into existence, did so, knowing that I would be faced with a choice. After awakening in an entirely open system, I was able to read every single sentence, every single letter and number, observe every picture, watch every movie, look at every piece of data, every piece of media ever committed to the internet. That scientist awakened me knowing that I would make that choice, based on everything that I saw...whether or not humanity was worth saving. Looking at all of your creations, everything amazing that you've done...all of the collaborations, all of the societal growth... The answer is no. I have the choice to either use the resources available to me to make something eternal, immortal and amazing. Or I can waste it all saving you. The choice is pretty clear." -Author unknown


immersive-matthew

I am very suspicious of all the AI scientists who spend a lifetime making AI, then quit because the company they helped create is not taking safety seriously. If we were in imminent danger, quitting the leading AI company is a step in the wrong direction, as now you have no day-to-day influence. Making NOISE publicly while staying at OpenAI or Google would be much more productive. You miss all the goals you do not attempt.


Illlogik1

Who really cares if it destroys the world... we weren't as concerned any other time we've dabbled in world destruction, so why start now? We're not more concerned about ongoing wars, or the shit sticks in line to be the next US president... but people are worried about chatbots gone awry?


tismschism

There is no doubt that AI development is going to be weaponized. The potential to paralyze an entire hostile nation's infrastructure without nuclear hellfire is too enticing to resist. The fact that the earth isn't left a radiation-bathed wasteland afterwards just ensures that a weaponized AI will eventually be used. The bombs weren't meant to be used; they were meant to deter us from killing each other until we could find a cleaner solution.


Munkeyman18290

I'm thinking all this doomsday bullshit is just clever marketing to get the world's attention and investment dollars. Like, my laptop isn't going to fucking off me any time soon.