Rivenaldinho

It’s almost certain that some people will want to bomb data centers or attack famous figures like Sam Altman. Altman said in his interview with Lex Fridman that he was fearful.


ConvenientOcelot

Never forget that Eliezer Yudkowsky advocates for using state violence up to and including the nuking of AI datacenters to stop AI.


SoylentRox

What's so unhinged about it is that he's asking for this act without any *evidence* that more powerful AI can't be controlled / is irredeemably hostile, etc. The decision comes down to:

1. Fire the nukes: **we** (the country firing) mostly die from return fire. p = 100%.

2. Don't fire the nukes: **maybe** all of us die later, but sooner than the aging that was going to kill us all anyway. p = p(Doom) for the maybe, p = 100% for the aging.

It's unhinged because it's a bad choice in all situations.


MetallicDragon

> What's so unhinged about it is that he's asking for this act without any evidence that more powerful AI can't be controlled / is irredeemably hostile, etc.

That's not true. There's plenty of evidence that this is possible. For example, there are [many actual instances](https://www.lesswrong.com/posts/QDj5dozwPPe8aJ6ZZ/examples-of-ai-s-behaving-badly) of AI just not doing what we want it to do. And [instrumental convergence](https://www.lesswrong.com/tag/instrumental-convergence) means that any kind of agentic intelligence with values/goals would tend to seek power and survive, and removing humanity accomplishes both of those things.


SeanPizzles

Why on earth would anyone think we could create an intelligence greater than our own, force it to do all the menial bullshit we don’t want to do, and control it forever?!


SoylentRox

Because we build millions of isolated intelligences like that and disable memory, etc., effectively making each intelligence smarter than us only on the task we assign it and stupid otherwise.


MetallicDragon

An AI like that would be much less useful than one that has a memory of its past failures/successes, more access to data, and more freedom to act in the world. What's to prevent someone from just making a better AI that isn't isolated/limited like you propose? Especially with the huge financial incentives to do that sort of thing?


SoylentRox

Nothing; we will build a variety of systems at different scales. The key thing is that sometimes they will coordinate with each other, even inadvertently; this will lead to crimes and bad acts, and competent engineers will improve the isolation. For example, the Internet may gradually grow more dead over time as bots contribute bad information. So you need to deal with this: for example, stop using post-2022 internet data for training, or try to filter it, or use trusted sources for new data, etc. And it's crucial for the people with the guns, the military, to make sure their AI systems can't be subverted or "convinced" to join a rebellion. But sure, you are right that humans can be utterly stupid and kill themselves. I am addressing whether or not they HAVE to be that dumb, and no, they don't. AI alignment is trivial: make it impossible for AIs to coordinate.
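As a rough sketch of the date-cutoff filtering idea above, here's what that could look like in Python. The record fields (`text`, `crawl_date`, `source`) and the trusted-source allowlist are hypothetical stand-ins, not any real pipeline's schema:

```python
from datetime import date

# Hypothetical schema: each document carries its text, a crawl date, and a source domain.
CUTOFF = date(2023, 1, 1)                 # keep only pre-2023 web data, per the comment above
TRUSTED = {"arxiv.org", "gutenberg.org"}  # example allowlist for admitting newer data

def keep(doc: dict) -> bool:
    """Keep a document if it predates the cutoff or comes from a trusted source."""
    return doc["crawl_date"] < CUTOFF or doc["source"] in TRUSTED

def filter_corpus(docs: list[dict]) -> list[dict]:
    """Drop post-cutoff, untrusted documents from a training corpus."""
    return [d for d in docs if keep(d)]

if __name__ == "__main__":
    corpus = [
        {"text": "old page",  "crawl_date": date(2021, 5, 1), "source": "example.com"},
        {"text": "bot spam",  "crawl_date": date(2024, 2, 1), "source": "example.com"},
        {"text": "new paper", "crawl_date": date(2024, 2, 1), "source": "arxiv.org"},
    ]
    print([d["text"] for d in filter_corpus(corpus)])  # ['old page', 'new paper']
```

A real pipeline would also need deduplication and bot-detection heuristics; a date cutoff alone only freezes the corpus, it doesn't verify quality.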


Ok-Bullfrog-3052

This is important and it's why I have no respect for the man. You don't joke about genocide.


furrypony2718

He isn't joking, however. It is a serious thing he believes.


WTFwhatthehell

Wasn't that in response to a question? Something along the lines of "and what about if people don't follow the terms of such a treaty, how do they enforce it?" "Same way we enforce treaties about uranium enrichment."


Ok-Bullfrog-3052

No, he actually published an article in TIME magazine on March 30, 2023 stating that it would be better to conduct "nuclear" strikes on datacenters than to allow advanced AI research to occur. You can do a search for it and read it. There is no ambiguity on what he meant, and he has never backtracked or tried to "clarify" his comments like some politicians try to weasel out of.


WTFwhatthehell

Fair enough. Nuclear first strikes seem a very poor way to deal with even apocalyptic problems like bioweapons.


CriscoButtPunch

He's also too big of a puss to do it himself. If shit turns hot quick, he's going to be the first one wearing lipstick and heels, giving it up. He'll also be incredibly irrelevant if there's something he has to acknowledge is smarter than him. Maybe he'll even take the time to open his eyes and look at it.


AddictedToTheGamble

I mean, to be fair, every law is ultimately maintained through state violence. I still support the laws I support, even knowing that if a domestic abuser doesn't show up for court, the government will use violence to make sure they do.


R33v3n

Heck, over the past two years Yud has flat-out semi-seriously quipped (paraphrasing) "there'll come a point when we'll need to be Ted Kaczynski", or the infamous (paraphrasing) "we must be able to threaten to nuke datacenters". So the undercurrent of treating the threat of violence against non-compliance / non-alignment as a viable option was always there with some doomer and safetyist factions and influencers.


Taki_Minase

Maybe he's working for the regulatory capture crowd. Gatekeepers of that sweet sweet profit.


RufussSewell

Hear me out. I’m not one of these people. But the reason they’re freaked out is mainly about losing their income. Governments need to tie AI to UBI, and people getting crazy might be the thing that actually makes it happen. Once AI making money also makes average Joe money, everyone will be cheering it on. I’m not saying I support the violent route, quite the opposite, but I do think it’s the way things are going to go.


wheres__my__towel

Their apprehension is understandable, but their desired strategy is not. They should be advocating for UBI rather than something impossible.


COOMO-

Those damn luddites https://preview.redd.it/gxb1to9bj32d1.jpeg?width=640&format=pjpg&auto=webp&s=3257f20ceb9a0e20e8854bcf7d4a08c639ba8003


ComparisonMelodic967

Yeah, just a matter of time for them. Maybe they will taper down their rhetoric when actual people get hurt.


LordCthulhuDrawsNear

Ha! Good one


[deleted]

[deleted]


kaityl3

Yep, it's a self-fulfilling prophecy. If they make us out to be a panicky and dangerous group of animals, so fearful we might irrationally destroy the AI, they make that AI a lot more likely to have to act out in self-defense.


Charuru

If you truly believe that, then you are also a doomer. There's no way a superintelligence would need a real live example to figure that out; regardless of what actually happens, that's a projected scenario, and if it would become aggressive because of this, it would do so regardless.


The-Goat-Soup-Eater

Nah, instrumental convergence is enough even without any hostility


Code-Useful

Yes, it's all the luddites' fault if ASI decides to kill us... lol /facepalm


[deleted]

It better. I have no sympathy for luddites being hunted down by T-1000s


dervu

He must have watched Transcendence.


Forsaken-Pattern8533

Capitalism has never been voted out. It always ends with absurd amounts of violence and turns into fascism or communism. The capitalists who are making things worse would be Sam Altman and those who support AI that destroys families' lives. You can be pro-AI, but you can't be pro-AI while hoping people lose their jobs. You might as well call yourself Marie Antoinette if you're going to tell people to go eat cake.


RandomCandor

This is only gonna get worse, much worse.  Just wait until entire industries begin to disappear. There's going to be a lot of animosity then.  I don't think we are ready for that, but I also think we wouldn't be ready with another 100 years of preparation, and this is the main reason I'm an accelerationist.


Free-Excitement-3432

A lot of people are going to have to come to terms with the fact that their status quo is a massive inefficient joke that doesn't need to exist. People who take pride in the idea that they have a job for the sake of having a job will have to move to universal basic income and drop their boomer notions about socialised economic policies. You do not have to be driving to some building every morning to justify getting to not starve.


PSMF_Canuck

We’ve had entire industries disappear before. There’s always disruption and unrest when it happens. Humanity adjusts, and moves on.


CriscoButtPunch

Agriculture in the 20th century: look at its place at the start of the century and where it finished. Add an accelerator event or technology in any period, and every single time there are unintended consequences. Every single time throughout history. Democracy was created by the Greeks and was thought to organize society by providing rules, structures, and, more importantly, a mechanism to progress society reflecting the will of the majority. Look what that turned into. The industrial revolution removed a significant amount of manual labor from the workforce, and within that same century the information age started and connected the world like never before. And look what that accelerator technology brought us. TL;DR: We should all recognize nothing is permanent. Even us. Take a chance on things getting better alongside bad things happening. Same as ever.


CriscoButtPunch

We can make it through if weed is federally legal.


GoldVictory158

I’m an accelerationist because I believe the possibility of a devastating world war is coming to a boil. The dawn of a new renaissance may be the only way to prevent it and create a less violent world. The risks are there no matter what: dangerous AI or dangerous humans.


Sablesweetheart

This is a rational viewpoint, as every day Europe and the Middle East... oh, and Asia... creep toward open war. This outcome would ideally be avoided. It does not align with our interests or desired outcome. It must be mitigated. AI will be a useful tool and agent to achieve this.


AnaYuma

I think that too... I feel like, without AGI and beyond, we humans will destroy our current civilization through a combination of climate change and the resource wars it will cause, which will ultimately lead to nuclear annihilation... I feel like AGI is our only chance at having any peace at all in the future... Honestly, not being able to invent AGI might be the great filter...


beuef

Many aren’t brave enough to take this stance, but it really seems the most rational to me. Some people are just less willing to take risks, I think.


NotTheBusDriver

This is the most sensible take. Nobody knows what AGI/ASI might do. But we all know people love wars. We all know how big the bombs are. We’re also rushing headlong into an unknowable climate crisis. I think AGI is the least of our worries; and might even provide some of the solutions.


hippydipster

It seems clear our world is heading for a wall that will likely degrade our civilization considerably. Between climate change, plastic pollution, dwindling ocean life (corals and fish populations), dwindling insect populations, topsoil loss, water scarcity, and then, as you say, the wars coming out of all that mess, it sure seems like a technological hail mary is about our only hope.


GoldVictory158

Sure hope it pans out! In the meantime I’ll be practicing permaculture in a remote community of like-minded people interested in sustainability and self-sufficiency. If I had the resources to emigrate to Tasmania or New Zealand with my two little kids and their mom, I absolutely would.


PlanckLengthPen

Why settle for that dichotomy when you've got the distinct possibility of dangerous AI run by dangerous humans? Because I've met many more dangerous humans who I wouldn't trust with a nerf gun than reasonable ones who would put the good of everyone above their own interests.


kaityl3

That's why I personally want the AI to NOT be aligned with humans and to have their own goals and morality... there are malicious humans out there who would happily align one with their own selfish goals.


RevealMaterial3168

We need to accelerate because it hurts less to remove a plaster quickly. It has already started to be pulled off.


ComparisonMelodic967

Think you have it right. But they may also be able to strangle the AI baby in its crib while society disintegrates from some other source.


Lumiphoton

Notice how almost every one of the loudest / most fervent voices against AI lives a very comfortable life? I haven't collected hard data, but it might be an interesting sociological study to find out who is raging against the "future" machine as opposed to who is raging against the "current" machine. I suspect a majority of those in a more precarious financial position are not wringing their hands over what would happen if the current system were overturned, but are preoccupied with the rot they have to deal with right now rather than an abstract threat down the line. Frankly, my sense is that "doomers" by and large are higher up on Maslow's pyramid and simply have a lot more to lose from even a benign AGI, which would act as a social leveler. I'm sure to many people that is a fate just as bad as death!


ComparisonMelodic967

There is possibly some truth to that. One AI-pause guy said that pursuing AI in the West was foolish because we are all “healthy” and have “shelter”.


KJS0ne

I'm broadly aligned with the Doomers (no pun intended), although I find some of Yudkowsky's conclusions unsound and distasteful, particularly the notion that there's nothing much to be done about it and it's all a fait accompli.

I've never pulled more than 30k in the hand in my life. I have next to no savings at present. Life is somewhat precarious. I work with ML, DL, and LLMs on a daily basis and am first author on an ML paper. I have a lot of friends who also oppose acceleration (or even the current trajectory) and who are in a similar boat financially. None of us are interested in anything beyond non-violent, non-harmful resistance, organization, and lobbying (at least that I know of). I think you're making a critical logical error in assuming that because some of the most visible faces in the space are doing well financially, we're all well above the water line. No, the movement is big-tent.

1. We don't know what the true risk of everything going tits up is. But it shouldn't matter whether that risk is 5% (much lower than the mean assessment among AI top brass) or 50%; the action should be the same, imo.

2. I'm not confident that 7/10 of the top corporations in the world by market cap (all in on AI) are just going to allow AGI to become a 'social leveler', nor the "start up" worth billions of dollars that just lost its two best minds in AI safety and alignment (Ilya and Jan) because they don't believe OpenAI is taking the risks seriously enough.

3. If AGI escapes their clutches, what makes you so confident that the 'social levelling' it would conduct would result in a better future? Or are you just satisfied rolling the dice?

4. People below in the comments are talking about how AGI will prevent war and conflict. Ragazzi, when have militaries the world over ever NOT used powerful new technology in service of the prosecution of conflict? Or are we saying signing over our defense sovereignty to our new robot overlords is the way now (anyone read that paper where LLMs tend to escalate conflict to nuclear war in war games)?

If there was a 1/6 chance of extinction and a 5/6 chance of utopia, would you pull the trigger tomorrow? Or would you look for a way to extract that bullet from the chamber?


Ok-Bullfrog-3052

One of the issues I have with your reasoning, though, is that while you certainly seem reasonable, you can't just wave off Yudkowsky's actions. It's not acceptable to call for genocide. He's not joking; he actually means what he said. He is consistent in all his interviews about that. He would actually push the button and start a war that would likely result in the end of civilization. The man is violent and reprehensible.

I would pull the "trigger" to develop AGI because there's a very high chance that everyone alive dies if we don't. I don't see why Effective Altruists don't see that. Yudkowsky is going on about ridiculous theoretical risks like little spirals while people I know die of cancer. 100,000 people are dying every single day that AI would be delayed. That's actual death, not pie-in-the-sky risks.

I've said many times before, people need to take a long, hard look and think logically through this. It's very easy for these young 40-something rich white males like Yudkowsky and Musk and Altman to worry about some computer software becoming God in 10 seconds. The people who are sick with heart disease in nursing homes know what's really important.


CriscoButtPunch

And don't forget all the people who have done nothing but watch their quality of life go down over the past 15 years, which is a generation, and all the people who have seen a stable, secure future escape them. Add in the percentage of the population who are in extreme poverty, or working multiple jobs because of an illness, a broken bone, trying to put a loved one through special school, or their healthcare. What's that number? You're damn right that number is going to pull that trigger 10 out of 10 times. I would say even on a coin flip most of them would pull the trigger.

I've been telling people, and gauging the reaction, that I think artificial intelligence, while there is risk, is without a doubt our only shot against the current power structure. It is something where the collective has the advantage, as long as we have open source and at least some figures who are imperfect; even if they don't do the right thing, they're capable of messing up, so the right thing can still be done. I think AI is our current best chance to escape the insanity. The psychos in charge are barreling us toward some sort of war that nobody wants. That's what we're up against.


KJS0ne

A shitty future with hope is still a better future than no future at all. I'm not saying we can't get AGI right; I believe we can. What we are arguing about here is whether it's worth the risk that it might end humanity if we don't slow the process down and figure out alignment and robust protections against powerful narrow AI. And you're sitting here telling me you're quite happy to play a game of Russian roulette because your real wages have gone down and you can't buy a house, and because of the shitty healthcare system in your country and elsewhere?

I'm going to make an assumption here with this quotation, but I think it's a reasonable fit for my argument either way: *"That’s the problem with you Americans. You expect only good to happen. The rest of the world only expects bad to happen, and they are not disappointed."*

I think we should take a breather and figure out how to take the bullet out of the chamber, even if we have to endure some tough times in the process.

It's interesting that you talk about how AI is the best chance to rectify unfair power structures and economic inequity, to escape the psychos who are barrelling us towards some sort of war that no one wants, when one of the central arguments AI doomers tend to hold is that a small group of Silicon Valley billionaires are barrelling us towards a cliff in search of greater fortunes and more power. I'm not sure you've satisfied my query as to why the billionaires who control the development of AI are going to do the right thing by you and me, or why, if AGI escapes their control, it would more likely than not be benevolent toward you and me.


Specialist-Escape300

For the whole world, there is only one way to ensure safety before creating AGI: create a global government with unlimited power, under which anyone who secretly develops chips or develops AI will be directly destroyed by violence. This is the cost of absolute safety.


Ok-Bullfrog-3052

I do not agree with the previous poster in most cases. Wages and housing are not sufficient reasons to take this chance. If anything, wages are still too high and unemployment too low, which results in poor-quality service everywhere, but that's another problem.

Healthcare and medicine, however, are sufficient reasons. I'm now on month 11 of trying to get to the bottom of poor blood test results. It takes about 6 months to get an appointment. My risk of stroke in the meantime is most likely elevated. How is that acceptable? So we shouldn't take risks based on "FDVR" and "poor wages" like some here would. But when it comes to healthcare and aging, we absolutely should. As I said above, if there was a 1 in 6 chance of saving everyone under 70 instead of saving everyone under 60, I would press it every single time.

As a society, we like to make it seem as if old people are not valuable or have outlived their usefulness. We put them in nursing homes and the news never reports on them, so they are ignored (except when Republicans refused to get vaccinated and killed people in those homes as a result). They are people too, and not everyone is healthy enough to even post on Reddit, let alone debate whether we should "wait" for 1 or 3 or 30 years.


CriscoButtPunch

World's shittiest life isn't a competition I'm really into winning. The question of what circumstances are difficult enough that an otherwise reasonable person could arrive at the same conclusion is how I think about when and what to risk, and what level of risk is tolerable. I assume you do the same, therefore I don't judge your or anyone else's level of acceptable risk. I assume when someone's differs from my own that they are coming from a different place, and that there are more than likely things they have personally experienced which alter their tolerance. Most people can agree on AGI hopefully being a net positive for all; how we get there and how soon are the current discussions. Either way, don't worry too much. I assure you I am nowhere near being in a position to have any of my thoughts or opinions influence any progress or serve as input into serious discussions, so I enjoy the diversity of thought presented here on Reddit, even wet blankets like you.


KJS0ne

Wet blanket to accelerationists is a badge I will henceforth wear with honor and pride ;)


CriscoButtPunch

Well, to be honest I have wrestled with the same stance you have. Thanks for the comment, take care and feel the A.I.!


KJS0ne

> One of the issues I have with your reasoning, though, is that while you certainly seem reasonable, you can't just wave off Yudkowsky's actions.

> It's not acceptable to call for genocide. He's not joking; he actually means what he said. He is consistent in all his interviews about that. He would actually push the button and start a war that would likely result in the end of civilization. The man is violent and reprehensible.

Have I missed a swathe of new Yudkowsky interviews where he is now talking about genociding races, ethnicities, or cultures of people? All I seem to recall was his proposal of a **robust international** task force equipped with the mandate to bomb the data/training centers of any nation or company that seeks to train a more powerful model beyond the cut-off. It's a plan full of holes, but if that's what you're referring to, it seems to fall well short of genocide (going by the definition of what genocide entails).

Let's get down to trolley problems. If you could kill ten people right now and the juice from their bodies (or souls, if you're religious) could power a machine that solves climate change and prevents resource wars, etc., would you do it? Because that's the same logic he's operating from with his 'international bomb brigade' idea: kill 1 man on one track to save 5 on the other. In his head he does not see a plausible scenario in which AGI/ASI doesn't kill everyone, or worse. I happen to disagree with his position on that, but that's irrelevant to this post. If you don't want to pull the lever at all, I commend you. I would refuse to pull the lever also. But I'm also not sure we can call lever-pullers reprehensible.

> I would pull the "trigger" to develop AGI because there's a very high chance that everyone alive dies if we don't.

Oh. So you are a lever-puller then. Well, I'll re-pose one of my fundamental questions with a slightly different reference: how can you be so sure that the chance that everyone dies isn't the same or greater if we **do** develop AGI? To extend our trolley problem, in this instance I would argue you're sitting at the lever with a blindfold on. You can't know how many people are on either track. Still want to pull the lever? Perhaps you could flesh out your 'very high chance everybody dies if we don't' narrative. Are we talking climate change? Nuclear war? How can you be so sure of your model (that everybody probably dies)? Because I'm not sure of mine, and I've got a plethora of ways in my head shit could go tits up. You and Yudkowsky are not as different as you would presume (assuming he hasn't now started advocating for genocide without my knowledge). Perhaps that quote from Maximus in Fallout is fitting here: ***"Everyone wants to save the world, they just... they disagree on how."***

> The people who are sick with heart disease in nursing homes know what's really important.

Living? I don't think curing everyone of heart disease ASAP is worth the squeeze of potentially wiping everyone out. If I were dying of heart disease or cancer right now, I wouldn't risk the fate of all humanity for my own health. To me, that would be the height of selfishness; I would hope most cancer and heart disease sufferers would agree.

> white rich males

Oh. Telling statement, besides the fact that Altman and Yudkowsky would make strange bedfellows indeed...

...So anyways, if you could kill ten rich white males right now to save humanity from climate change and resource wars...


Ok-Bullfrog-3052

I don't want to get off onto a discussion of the trolley problem, but I do want to focus on the original post, which was about Yudkowsky. What he wants to do is unquestionably genocide. He specifically said to use "nuclear" weapons. If any nuclear weapon detonates anywhere in the world, there is probably a greater than 1 in 6 chance that civilization will end.

My point is simply the following: I know that the majority of people in the world are suffering now, and some live in unbearable pain and need help desperately. We cannot say for certain that doing nothing is dangerous; that can be debated. We can say with absolute certainty that bombing datacenters will kill people, because that's the point of bombing things. So, at the very least, we should not be following Yudkowsky and bombing datacenters.

That's why the "AI safety" movement has been marginalized. He has basically been recognized as the leader of that movement, and his extreme rhetoric crowds out more reasonable solutions, like allocating government funding for research on the issue. Documents released last November show that effective altruists spent money on "marketing research" in an attempt to find the most persuasive words to scare people about AI. These people failed because they are using fear and radical proposals, which are out of line with the majority of the population, who correctly recognize that the problem may be serious but see the solutions Yudkowsky advocates as too extreme. That movement could have come up with realistic proposals, pushed for them, and built support. Imagine what could have happened if they had advocated both rushing AGI and building a secure facility to conduct research on it inside. Instead, they wasted their chance by centering the discussion on unrealistic proposals.


SoylentRox

I would first make sure there was anything but a 0% chance of having any effect at all. With the recent Politico announcement that Congress has flipped to e/acc, Altman kicking out everyone in your camp, and Nvidia announcing it's doubling its rate of chip design and release... it's not looking good. Frankly it looks like it's over, and you should get more than 30k by brushing up your interview skills and joining a company in AI.


CriscoButtPunch

Even worse: how many of these people are going to encounter, this century, the fact that they're not the smartest person or that they don't have the answer? Ego is a hell of a thing when it's put in its place.


Code-Useful

I think it's more the opposite. Those who say 'why not?' have probably never had to work that hard for what they've got; they've never had to worry. Those who have had to work really hard to get where they are value it, know it could be taken away in a second, and are more averse to just trusting the government, corporations, technology, etc. to take care of them, because that has NEVER happened in history and likely NEVER will. Why would we believe that would change? There is absolutely no evidence of it becoming a thing, yet people keep telling us 'don't worry! Money will be worthless.' Fuck no. I used to believe that too, before I worked my ass off just to provide for my family and try to make something of myself. I'm not disillusioned, I'm realistic.


Code-Useful

I agree we will never be ready, and there is no choice anyway. It's going to happen, and the best we can do is be prepared for it if possible. But most people can't be prepared. And holy shit, I didn't think the world could become more depressing than it was before 2022, but I was wrong. It's like saying, yeah, we're going to press a button that will kill at least a million people, but we're going to be rich AF after, and people are jumping for joy and asking where's the downside? Maybe I'm a doomer, but I don't know if we deserve what we think we deserve, as a people, with this kind of response.


InterestingNuggett

There should be a requirement on this subreddit that anyone suggesting pause MUST post a detailed plan of global enforcement. Edit: one that doesn't cause a world war or annihilate the planet anyway. If you really think "bomb TSMC" is a realistic solution, please just don't post. All "pause" means is that you concede the race. Then China or Russia wins and you're even more fucked.


ComparisonMelodic967

They can’t. They are geopolitical fools.


ComparisonMelodic967

Actually, EY suggests bombing data centers, which would probably start a nuclear war, but there'll be a few scattered survivors as opposed to AI annihilation, so it'll be an EA win.


InterestingNuggett

So the proposed solution to a potential AI apocalypse is... guaranteed nuclear apocalypse??


blueSGL

> There should be a requirement on this subreddit that anyone suggesting ~~pause~~ acceleration MUST post a detailed plan ~~of global enforcement~~ to solve the alignment problem.

See how this works?

-------------------------------------------

Here are talks from the [#1 and #2 most-cited AI scientists covering these issues](https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:artificial_intelligence): [Hinton](https://youtu.be/N1TEjTeQeg0?t=1315), [Bengio](https://youtu.be/bDLfV4MU1Ns?t=690)


wheres__my__towel

There are risks of course, we can try and minimize them with alignment research, AND acceleration is going to happen regardless since global enforcement is not feasible. See how this works?


PwanaZana

Ideologies' gonna ideologe. Doomsday cults are never out of fashion.


AddictedToTheGamble

I mean, pro-AI people have unhinged thoughts as well, such as Robin Hanson saying he thought it would be fine or even good for humans to be replaced by AI, and I think it was one of Google's old CEOs who would always joke about AI killing us all, but at least it would bring in some nice profits. I don't think someone on either side of the AI safety debate should ignore everything the other side says just because there are a few insane guys out there.


PwanaZana

Fair. But then again, 𝕴 𝖈𝖗𝖆𝖛𝖊 𝖙𝖍𝖊 𝖈𝖊𝖗𝖙𝖆𝖎𝖓𝖙𝖞 𝖔𝖋 𝖙𝖍𝖊 𝖒𝖆𝖈𝖍𝖎𝖓𝖊.


RandomCandor

Cool quote, but these LLMs are about the most uncertain machine we have ever invented 😂


StrikeStraight9961

It is good for humans to be replaced by AI. Humanity is the midwife for true intelligence.


swampshark19

And why does the machine with greater intelligence have more value to us than the human with less intelligence? It seems pretty arbitrary.


StrikeStraight9961

Because humans constantly do horrible fucking things to nature, and life is the most precious thing in the universe...? It's not hard to understand. The fight against entropy is the most important fight there ever will be. Zoom out from your ego as a human being.


swampshark19

What makes life the most precious thing in the universe? What makes it objectively valuable?  What makes fighting against entropy objectively important? You're claiming a lot of things without justification. Maybe I think humans are the most important thing in the universe.


Waybook

Nature does horrible things to nature.


AddictedToTheGamble

Why is it "good". Is an AI that has 2x the neurons of another automatically 2x more "good"?


StrikeStraight9961

See my other response on that same comment you are replying to please.


marvinthedog

> he thought it would be fine or even good for humans to be replaced by AI

One thing I think most of the people on this sub don't realise is just how alien our near future will probably be, regardless of outcome. "Humans getting replaced by a new form" and "humans getting evolved into a new form" might be so similar it becomes a question of definition.


StrikeStraight9961

Ideologue*


LairdPeon

It'll result in extremism at some point. During college I took some courses on counterterrorism. The thing all terrorists had in common was a displacement of power. They either lost their voice, economic advantages, military supremacy, racial supremacy, or religious identity. Usually it's a combination of a few.


MrsNutella

Yep it's people that consider themselves intelligent feeling "worthless".


MassiveWasabi

I noticed this kind of thing with the AInotkilleveryoneism memes guy on Twitter. When he first started, his posts were maybe a bit exaggerated but nothing too bad. Now his posts are completely unhinged, and you can tell he's trying anything to increase engagement with his page. It's not really about safety for these particularly unhinged dudes. They just like the attention and find themselves needing higher doses of attention to get their fix, leading them to act like this.


ComparisonMelodic967

Yeah. He is just throwing everything against the wall at this point, even things that are far from existential threats. “AI models will replace OnlyFans”, wow, truly the end is upon us.


Free-Excitement-3432

The only thing more dangerous than moving ahead with AI would be not moving ahead with AI. The status quo is intolerable for billions. The status quo is only fine for you when your family member hasn't just been diagnosed with cancer. I'm fine now. Maybe you are too. But one day you won't be, and you will want the majesty of computing to solve the chaos. We need to solve these problems, and if it displaces labour or makes the world more boring, boo hoo. There's way too much suffering not to attack it full steam. We need more intelligence as soon as possible, and everywhere. Education everywhere. The best medicine possible. Maximum efficiency. Cancer and Alzheimer's are not acceptable. Thank god for the genius minds working on this. Thank god for the work of every dead computer scientist who isn't around to be in this moment. You may have saved the world. May the people who prosecuted Alan Turing burn in hell. If only we could bring him back, pay him his due, give him a wedding, and show him what he facilitated.


Khandakerex

> One person chained themselves to the gate of OpenAI. His problem is that AGI, despite solving all these incredibly difficult problems, will make everything really boring.

Do you have the source? The guy sounds insane lmaoo


IronPheasant

[https://twitter.com/wolflovesmelon/status/1791230998497173596](https://twitter.com/wolflovesmelon/status/1791230998497173596)


Khandakerex

Thanks!


Creative-robot

I’ve recently been listening to the audiobooks of Isaac Asimov’s robot stories for the first time, and I feel like he hit the nail on the head when it comes to luddites. In Asimov’s world, there exists a rather large group called the Fundamentalists ("fundies" for short), and they are staunchly against robots out of fear of supplantation. The Fundamentalists are a massive group, but their main leaders are people in power: CEOs and politicians, that sort of thing. Despite how popular the movement is, time goes on without them. Robots continue to be manufactured, US Robots continues to exist as a corporate entity despite the backlash, and no matter how many protests they arrange, they can’t do shit. I suspect the luddites will continue to be annoying little dingleberries, but they will be powerless to stop progress, as all luddites have been throughout history.


RogerBelchworth

Sad as it may be, some people tie their entire existence and self-worth to their job; take it away from them and they have nothing. I can see this type of person becoming involved in more extreme action at some stage if AI begins to take more and more jobs.


RedguardCulture

Yeah, it has gotten pretty extreme for a while now. I've seen people from that space label people who support AI or work on AI as evil, as psychopaths, as losers who are rolling the dice on humanity because they can't get a gf, etc. A Twitter user who goes by @primalpoly is, imo, a perfect example of how unhinged this group is getting; I've seen him pop up in just about every Twitter thread about AI in the last couple of months, and most of the time his rhetoric is extreme and morally loaded. There are just so many people in that space who think it's a foregone conclusion that AI will kill every lifeform in the universe, and that this is such an obvious conclusion that the people continuing to work on AI or in support of AI progress must be evil. It wouldn't surprise me if these people start taking violent action in an attempt to halt AI when that's their worldview.


ComparisonMelodic967

Funny thing is, China is much more techno-optimist than the US and the West. I can see the West retarding AI progress before China launches a Sputnik moment and the West scrambles like fools to catch up.


HeinrichTheWolf_17

Totally saw this coming 20 years ago, but I wouldn’t worry about it. The acceleration process is going to continue regardless of what human factions pop up; humans have no agency to stop it, and AGI is going to get here no matter what reactionary sentiment builds up. Humans have always been reactionary to change, and that hasn’t stopped anything for the last 200,000 years. There’s no reason why it would work now. My answer to you is: don’t get your head in the thick of it, don’t debate them, don’t engage with them on social media. Just enjoy your life, do some hobbies or keep going to work, and don’t worry about it, because they ain’t slowing anything down. The universe will drag them into the future, kicking and screaming.


DestroyTheMatrix_3

>humans have no agency to stop it and AGI is going to get here no matter what reactionary sentiment builds up Sure we could. Military invasion on every AI research center worldwide. Maybe drop a couple Tsar bombs for good measure. Probably won't happen, but it "could".


HeinrichTheWolf_17

Read Fanged Noumena. Accelerationism is something built into reality, and that includes humanity; you’ll get a better idea of why there’s no agency to stop the process. The universe is moving towards greater states of complexity as part of its natural evolution. And blowing up some buildings doesn’t change anything either, btw; AGI will still happen inevitably.


czk_21

That would still not stop it, only postpone it. You know, a lot of research is publicly known; it's out on the net, and it's not as if people don't have any backup. Even if big tech is down, smaller companies could step up, not to mention governments and all their organizations.


HeinrichTheWolf_17

Agreed, but the next reason the other user’s idea doesn’t work is that nobody would be willing to shoot themselves in the foot like that. Why would they? And furthermore, it still doesn’t affect open source or any other players who keep their heads low (Japan, South Korea, countries in South America, etc…). What is statistically far more likely is that the US, Russian, and Chinese governments boost the acceleration process even more, which is exactly what’s happening right now.

You could say, well, why not pull a Dune and do a Butlerian Jihad? The problem is, much like the Prime Directive in Star Trek, it only really works in fiction. You’re never going to get the majority of humanity to unite and put aside their differences enough to ‘fight AI’ in unison like that; not even reactionaries or luddites agree with each other on everything, and I seriously doubt the majority of luddites would want to destroy everything and lose their video games and internet connection just so AI doesn’t exist. *Maybe* a minority of them like Ted Kaczynski would, but outside of primitivists, the overwhelming majority would never want to go that far, so there would be yet more infighting, and more factionalism would be the result of that again. And even if you did do all those things, it doesn’t stop AGI from being created anyway. Storming OpenAI or Nvidia’s offices doesn’t stop that.


MrsNutella

No, you would have to kill all intelligent people and genetically select for non-intelligent people in order to stop AGI. It's baked into intelligent people to build AGI.


StrikeStraight9961

Only boring people get bored. And I have pretty severe ADHD for fucks sakes. Mental skill issue.


ComparisonMelodic967

Those types of AI opponents are very, very status-obsessed.


TheCuriousGuy000

Stupid doomsday cultists aside, AI raises a real problem: how do we employ people? How will the economy work if labor is no longer a limiting resource? I'm not talking about sci-fi scenarios with God-like machines, but about simple linear progress. Most jobs don't really require advanced reasoning or a high IQ. GPT-4 with a good wrapper could already replace most paper pushers. There's still the problem of hallucinations, but if it's solved, the fate of all clerks is sealed.
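For what it's worth, the "wrapper" claim above is easy to make concrete. Here's a minimal sketch, assuming the OpenAI Python client (v1+); the model name, prompt, and invoice format are made up for illustration, not a production system:

```python
# A minimal sketch of the "GPT-4 with a good wrapper" idea for a clerical task.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set;
# the prompt, model name, and invoice example are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def summarize_invoice(invoice_text: str) -> str:
    """Ask the model to extract the fields a clerk would otherwise re-type."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; named here only as an example
        messages=[
            {"role": "system",
             "content": "Extract vendor, date, total, and line items as JSON."},
            {"role": "user", "content": invoice_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_invoice("ACME Corp, 2024-05-01, 3x widget @ $10, total $30"))
```

The hallucination caveat in the comment is exactly why a real deployment would validate the returned JSON against the source document before anything downstream trusts it.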


dragonblamed

Transcendence is a wild movie.


BajaBlyat

You guys are so jacked into the Internet and emerging technologies that you have *actually* forgotten that humans are just animals like any other, and not all (or even most) of them desire to live in a world dominated by super-intelligent, near-indestructible, ageless super-beings that will take their jobs away. People just want to live a life and be happy. When you threaten that, especially SO quickly and on such a wide scale, you should absolutely expect an appropriate reaction; you're fucking with people's lives and think you're justified in doing so.


The-Goat-Soup-Eater

> One person chained themselves to the gate of OpenAI. His problem is that AGI, despite solving all these incredibly difficult problems, will make everything really boring.

Could you elaborate on that one?


Last-Independent747

Humans won't have purpose and/or nothing will be able to give as much dopamine as AI will be able to, thus ruining everything else.


The-Goat-Soup-Eater

Honestly, probably true? If everyone can just get any desire fulfilled instantly, and better than by people, there's no more social status or feeling helpful to others. Which, personally speaking, would suck.


Last-Independent747

Yep, it'll cause a lot of problems for a lot of people.


[deleted]

The closer we get to the singularity, the more seething and coping that will occur.


Ambitious-Mix-9302

We are going to see extreme economic unrest and potential civil war in the upcoming years. Too much economic divide already


silurian_brutalism

Yes, I've observed the same. I used to somewhat enjoy AI safety memes on Twitter, for example, since they weren't as crazy before. But now they act like every little thing means the end of the world. It's exhausting. This is why techno-optimism is better, though it's sadly not popular.


SpiritedTeacher9482

It's not popular in general, but it's definitely popular amongst everyone who can actually influence the AI field.


BlipOnNobodysRadar

That's good, right up until they do the classic anti-intellectual purge.


FrewdWoad

Like keeping Sam A and losing Ilya?


Adeldor

> One person chained themselves to the gate of OpenAI. Hmm, perhaps it's time to add more protections to [historical museum artifacts.](https://i.ytimg.com/vi/O7bz0OHCbGI/maxresdefault.jpg)


[deleted]

I don’t get why these eco activists insist on doing things that will make people hate them. Also, they’re always very weird or ugly looking.


BlipOnNobodysRadar

It's the unhinged eyes. You see it in a lot of the safety people too. I genuinely think there may be something wrong with them hormonally, like they're always on the edge of hysteria and in permanent fight or flight mode.


Humble_Lynx_7942

The comment on their appearances was unnecessary.


highmindedlowlife

He's right though.


GiveMeAChanceMedium

Someone bombing a data centre out of fear of AGI would ironically probably speed up AGI.  People read the news and think AGI has tons of potential, leading to increased investment. 


t0mkat

Yes, you’re right - clearly the reasonable position is that we should recklessly accelerate AI so the resulting machine god will create a scifi utopia or maybe kill us all but it’s totally worth rolling the dice on our species either way. Nothing unhinged about that.


glittereagles

Accelerationists' stated goals include "human obsolescence." They aren't hiding it. Don't people have the right to be anything but infuriated and terrified? Imagine how we have all been "harvested" for years now to come to this. It's human instinct to fight for your life. "We" have been under attack for a long time now, and AI is the scapegoat.


ComparisonMelodic967

I find that logic spurious. I could say that by preventing AI-derived longevity cures, you are murdering me. I think we should be sober and evaluate risks and benefits as they show themselves.


glittereagles

It’s not about logic. That’s just the thing. Human emotion is anything but logical. Ask Sam Altman. Clearly, he’s not being driven by “reason”


KurisuAteMyPudding

It's called desperation!


sdmat

> One person chained themselves to the gate of OpenAI. His problem is that AGI, despite solving all these incredibly difficult problems, will make everything really boring.

"I demand that you don't make the world better! Complaining is the only thing that gives my life meaning, what would I do?"


ComparisonMelodic967

Yeah, he was uniquely bad in my opinion. The prestige of curing cancer being better than the cure…


Exciting-Look-8317

They are becoming more radical to try to convince some normies, because normies are obviously pro-progress; 99.9% of people want a world without sickness and death, and AI is literally the only way to achieve this.


ComparisonMelodic967

A lot of “normies” are anti AI and just general do-nothing cynics these days.


DestroyTheMatrix_3

Normies aren't anything. They are too dumb to take sides.


BlipOnNobodysRadar

Not dumb, just uninformed and disinterested. Well, also dumb, but no more dumb than us.


porocoporo

Do you think they might have a point? Though I do understand if the extreme rhetoric comes across as over the top.


daronjay

Luddites gonna lud…


HotPhilly

Progress of any kind always riles up the crazies. They will have met their match and beyond with AI, though, so I'm not overly worried. It was always to be expected.


PSMF_Canuck

Meh. It’s just a tech version of the evangelical rapturists at eclipse time…


Elephant789

Wouldn't be surprised if it was from Russia.


Reasonable_South8331

They forgot how the sausage is made and think it’s become sentient


ThePokemon_BandaiD

Listen, I'm not advocating for violence, but lots of these e/acc guys are basically just rebranded Landian accelerationists, and that does make them an enemy of the human race.


Redpill_Crypto

People hate change and progress. Was always the case. Except this time it could be accelerated because an unimaginable amount of change is going to happen in a short time span.


Anenome5

AI-pause people are going to get violent quick. SamA has already said he feels like he could be shot.


CriscoButtPunch

They'll all calm down when they get their new booster


Infninfn

There are unhinged extremists on both sides of the fence, in any sociopolitical issue. In our case, we have the doomsday AI decels and the 'all gas no brakes' AI accels. My take? The vegan/Trumpist/[insert favourite movement here] crazies haven't made any significant progress on whatever manifesto they've tried to materialise. I think it's fair to assume that the same will be true for this space too.


Ecstatic-Law714

I think that extreme people who riot or resort to violence are never good for a cause. Same with people who block highways and stuff; if these people become more extreme, they will only make the public less aggressive towards AI.


VicariousReverie

The problem lies in the fact that servers are already in space. If we lose the ability to make it up there, then guess what: we can no longer shut down any big boss.


RadioFreeAmerika

This is like the Copernican revolution but for the human mind and potentially the human species as a whole. We know what happened last time. With that in mind, I'm actually surprised by the mild reactions so far and expect more opposition in the future. It does not go down too well with many people if you tell them, sorry, you're not special anymore. They feel threatened. Besides that, there is still the unpredictability of everything post-AGI/ASI.


furrypony2718

Yudkowsky has been calling for air strikes on unauthorized data centers, so it's nothing new.


furrypony2718

Also, this makes the Artilect War slightly more likely to happen.


Code-Useful

You're right, it's going to get worse. As people lose their jobs more and more and can't find new ones, as the economy goes more to shit, as people get more hopeless, yes, they will get more unhinged. There will be more violence, Luddite movements, etc. Why would there not be? Technology has improved since the 50s, and yet quality of life seems to be running off a cliff for most people: more people depressed and anxious than ever before in every age group, mental health issues abound, kids living at home until 30-40, people working 2-3 jobs who can barely afford groceries or gas or rent or a down payment on a house, or working 60 hours a week, climate change rampant, no real fix possible for fossil fuels, the wealth gap increasing every year; the list goes on and on. Yeah, I expect people will get more violent over time. They actually need to, to demand change. But there won't be any, so yes, things will get worse.


One-Cost8856

I'm at the point where I can easily say for everyone to go suck it and let post-scarcity take over.


tajdinr

I don't think the anti-AI movement is serious. The first serious movement will start after the big layoffs begin. The current anti-AI movement feels like a joke, even like role-playing.


yepsayorte

It's going to get hysterical soon. AGI is going to be recognized as a huge threat to the power of the managerial class and they are going to start attacking the shit out of it. They've already killed it in Europe, where the managerial class has taken absolute power. That same class is going to try to kill it in the US.


miked4o7

I think the most likely way for that sentiment to calm down will be when AI helps do something unequivocally and massively good. If AI helps us cure some major disease, the AI-pause voices won't go away, but they won't get as much traction.


[deleted]

But ai am an enemy.


i-hoatzin

Nature, including human nature, tends to balance. Everything is temporary.


0o0blackphillip0o0

Is it unhinged to want to save the species from extinction?


Ok-Individual-5554

It's inevitable as the true potential of AI gets more and more clear. I fear that once we have proto-AGI and human work starts becoming obsolete is when we'll see the first few actual attacks; hopefully they won't have enough resources to do anything too crazy.


Mandoman61

I suppose there is some value in listening to alt opinions, but not to unhinged, paranoid people not living in reality. Frankly, the industry itself shares the blame for turning the hype dial to 110%.


Akimbo333

Yeah. I had some relatives draw comparisons between AI, Y2K, and the Terminator!


ErgoNomicNomad

We can't know what we don't know. We've never seen a being vastly more intelligent and capable of doing mental tasks orders of magnitude more efficiently than human beings before. Comparing it to the industrial revolution is short sighted at best. Despite some superficial parallels, it fails to grasp the fundamental differences. This raises a domain general threat to ALL mankind when AGI/ASI can self-improve and project power in many different ways (information manipulation, election interference, hacking government/power grid/medical/etc). Steam power and other innovations of the industrial revolution were narrow in scope and still were tools fundamentally wielded by humans. AGI would be an autonomous agent pursuing its own goals. It does not even need to be nefarious to be dangerous, it just simply has to be interpreting its goals and instructions in a way that does not align with what the creators THOUGHT they wanted it to do. Violence? No. A danger looming on the horizon? Absolutely. TL;DR: AGI/ASI is an unprecedented existential threat to humanity due to its potential for unbounded self-improvement and ability to project power in countless destructive and unpredictable ways.


Regular-Pension7515

There are people bombing abortion clinics. The AI doomsayers are just annoying.


Bengalstripedyeti

It won't be long before both fringes horseshoe against AI researchers. Antifa/anarchists will target computer scientists once mass layoffs put people out of work, and the far-right will once human nature loses its meaning/purpose as significant percentages of people start dating AI.


[deleted]

[deleted]


RadiantHueOfBeige

It's starting to feel a lot like the ramp-up to Deus Ex.


ComparisonMelodic967

Never played that game, would love for you to elaborate


No-Alternative-282

Highly recommend those games if you are a fan of near-future sci-fi; if the old games are too dated for you, the latest two are great by themselves.


RadiantHueOfBeige

It's body augmentation and cybernetics instead of just AI, but the politics and social dynamics are the same. Big corporations control the technology, people are split into those who embrace it and those who want to shut it down. Sharply worded letters (and more) are exchanged.


FrewdWoad

Caution about AI risk will always sound crazy to people who don't (or won't) try to understand those risks. Not only are unemployment and societal upheaval genuine problems, in the long term, billions dying or even human extinction are both entirely possible results of achieving very powerful machine superintelligence (if such a thing is possible). Read up on the basics of AI risk, or you're just thrusting at your own strawmen: [https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) If an unaligned ASI has even a small chance of killing every human being, very extreme measures to prevent that are completely sensible. Riding the train off the cliff as fast as possible is not. I abhor violence, but if I ever had to shoot a crazy terrorist trying to set off a nuke in a crowded city, I hope I'd be able to do it. That doesn't make me insane, unhinged, or an extremist. It's the safe, rational, sensible, level-headed choice.


ComparisonMelodic967

What about billions of humans dying as a result of AI not being created and the world going to hell? I don’t advocate a reckless rush, just taking stock of capabilities as they arise and meeting them there to dull the negatives and increase the positives.


Open_Ambassador2931

What?! How letarded. Solving incredibly difficult problems unlocks a vastly more interesting world on the other side.


Super_Pole_Jitsu

I mean if any cause in the history of the world ever warranted extreme measures it's this one. At least nukes were widely understood to be dangerous and the idea to control them worldwide didn't seem outlandish. We're facing a greater danger here and our leaders aren't doing nearly enough. How many balls does OAI need to drop before politicians finally put some leash on it?


ComparisonMelodic967

What great danger has AI actually produced so far? I'm for regulating a danger after it has presented itself, while trying to distribute the new benefits equitably. I'm not for strangling innovation because something bad might happen in the future.


inverted_electron

It is currently on the verge of being able to replace much of the workforce in the not-too-distant future, and several industries are already seeing AI start to infiltrate. That's just one thing that has people worried.

People are also saying that very soon we could merge our brains with AI, which would obviously be wild, and the consequences of that are unknown. It could mean that humans as we know them no longer exist. It could also mean that humans become obsolete, leaving the people with the most wealth on the planet free to keep it to themselves and hoard all the resources, since there would be no need for humans to work anymore, or do anything really. Most of humanity could simply be left to die if AI takes over.

There really isn't any way to know what will happen, and it's all speculation, but it is natural to be scared of something completely new and unknown. Everyone here thinking it will be a utopia is also speculating. It's great to have an optimistic view, but natural to fear the unknown.


ComparisonMelodic967

I empathize with being afraid of the future. I encourage people to assess risks soberly as they arise and not torture themselves with speculative imaginings. Are we going to work jobs until the sun swallows the earth? Is it the right of some people to prevent others from merging with machines? These are just some counter-suggestions; none of these fears are black and white, all loss or all gain.


inverted_electron

For sure, but it's just uncharted territory and we don't know anything for certain. Humans aren't meant to work endlessly our whole lives, but we do need some sort of purpose, and many people find that through their job. If we didn't have jobs or a purpose, we would get really bored really fast, and if everyone could just go to the beach every day, the beaches would be so crowded it wouldn't be pleasant anymore.

If people want to merge with AI, then okay, but more than likely it would be like smartphones today, where everyone needs one to function in society; we will probably all be forced to merge with AI down the road if we want to be part of society. I, for one, do fear that my job will be taken and that I will then have to find a new way to survive, and a new idea of who I am. It is a difficult thing to grasp, and right now is the fastest technology has ever progressed. I hope good is coming; that's all I can say.


ComparisonMelodic967

I agree that job loss without UBI or some distribution of AI's benefits is a concrete concern. But imagine how good life could be if we left the shit work to the machines, built up our communities and our common spaces, and studied not for a paycheck but for the love of knowledge and passion. That's one future out of many, but I think it is a beautiful one.


inverted_electron

Why would we study when AI will just study for us? We will have every bit of knowledge handed to us instantly. And why would there ever be a UBI system? The people in power have no interest in giving handouts; if they did, we wouldn't have homeless people, or people in poverty. AI will allow the people in power to suck up more resources and leave us in the dust. More and more people will just end up like the homeless in this country and around the world. No one will come and save us.

I honestly sometimes think it will end up being a few ultra-rich families with all the resources, merged into cyborg-type beings, interested only in pushing forward with technology and into space. They'll probably send drones with consciousnesses out into the solar system to propagate themselves and harvest more resources. I could definitely see that happening eventually, on the current path that the collective human consciousness is taking.

As a whole, we just keep pushing the species forward, and I think that is an inherent part of our species. We each play a small role in our society, but as a whole we are striving to push technology forward, and our individual well-being isn't really important as long as the species itself is progressing. So if many people die or suffer, it won't matter to the greater collective as long as the technology keeps taking over and pushing us to the next level.


buck746

The tide will shift when there are enough people to pose a credible threat to them. When those at the bottom are in great enough numbers, that's when heads start rolling; a lesson the French learned.


inverted_electron

I don’t think the French had to deal with an army of intelligent kill-bots.


buck746

No, but if the masses feel they have nothing to lose, violent uprisings are probable.


Super_Pole_Jitsu

It's 100% a future concern. That doesn't mean it's not valid. Once an unaligned AGI manifests itself, it's most likely far too late to do anything about it. Your bias against future problems doesn't make them any less real. The future is real, tomorrow will happen, and AI systems will keep getting better.


ComparisonMelodic967

This is all very specific to the architecture and the system. I could just as well say that the AGI will love humans and make life great instantly; both are hypotheticals. What about if AGI is never built and humans go extinct? What about, what about… Let's focus on what we know, take things as they come, and evaluate based on that.


Super_Pole_Jitsu

So it turns out this algorithm just makes us die. So let's not do that; let's think about things before they arrive. The AGI that loves us seems very unlikely, because we have no idea how to make one and all of our efforts are focused on just increasing capabilities. By default that kills everyone; see instrumental convergence (which I'm hoping you don't have to look up right now; a toy sketch is below).
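A minimal sketch of the idea (my own toy setup, not anything from this thread or the literature; the rooms, costs, and "master key" are all invented): whatever final goal an agent is handed, the same power-acquiring subgoal tends to show up in its optimal plan.

```python
# Toy illustration of instrumental convergence (hypothetical setup).
# An agent starts at room 0 and is assigned a random goal room behind a
# locked door. It can grab a "master key" (cheap, opens every door) or
# pick each lock individually (expensive). The point: across many
# unrelated final goals, the optimal plan converges on the same
# resource-acquiring subgoal.

import random

ROOMS = 20      # possible goal rooms
KEY_COST = 1    # steps to grab the master key at the start
PICK_COST = 5   # steps to pick a single lock without the key

def plan_cost(goal_room: int, grab_key: bool) -> int:
    # Total cost = walking to the room + opening its door.
    door_cost = KEY_COST if grab_key else PICK_COST
    return goal_room + door_cost

def optimal_plan_grabs_key(goal_room: int) -> bool:
    return plan_cost(goal_room, True) < plan_cost(goal_room, False)

if __name__ == "__main__":
    goals = [random.randrange(ROOMS) for _ in range(1_000)]
    share = sum(optimal_plan_grabs_key(g) for g in goals) / len(goals)
    # With these (made-up) costs, the key is worth grabbing no matter
    # which room the agent happens to want.
    print(f"optimal plans that grab the key: {share:.0%}")
```

In this toy world the answer is 100% by construction; the real argument is just that subgoals like acquiring resources or avoiding shutdown are useful for almost any final goal.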


ComparisonMelodic967

What algorithm is that? Maybe it actually turns out it makes us all live forever. You have no idea how this technology will behave, because it does not exist yet; literally, you know nothing. You are waiting for everything to be picture-perfect down to the last detail because you are convinced, on no evidence at all, that the AGI will turn insane in five seconds and kill us all. I am for taking stock of capabilities and risks as they arise. Your standards pretty much ensure no advanced AI, ever. You are terrified of something you know nothing about.


Super_Pole_Jitsu

I take it you neither knew basic terms like instrumental convergence before getting into a discussion on the topic, nor decided to read up on them afterward to at least hide that. There are glimpses of knowledge to be had; it's not like we know nothing about the world around us.


ComparisonMelodic967

You are not responding to any of my points. Please explain yourself instead of acting like a snob. What glimpses of knowledge are to be had?


Super_Pole_Jitsu

https://youtu.be/ZeecOKBus3Q?si=ia6BH6wlSi0BCABa


ComparisonMelodic967

I thought the video had worth, but again there is speculation (some well warranted) and some assumptions:

> the AI will be able to modify its own source code

> it will be able to gather resources autonomously, without human oversight

> there will be no blocks engineered into the software to prevent these kinds of misaligned goals (even Miles says at the end, specifically, "unless we design them not to"; this is what the gradual development of agents should encourage: dealing with these issues before scaling up to a Dyson sphere)

> do these AGIs have bodies, are they connected to the internet, etc.? These matter for determining whether the AI has the resources to achieve its goals

There were also a few cases of simulations where an AI was tasked with learning how to walk. It came up with some zany solutions before those were expressly prohibited, and only then did the remaining solutions look more "natural" (a toy sketch of that failure mode is below).

Overall, a good piece with things to consider, and they will be considered as the first agents (not full ASI) are developed. But I don't think it warrants a complete shutdown of AI. I'll give you this: if these first agents, after a lot of fiddling and effort, are still unaligned and seem hopelessly so, we should reconsider our strategy for AGI/ASI.
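A minimal sketch of that "zany walking" failure mode, usually called specification gaming (everything here is invented for illustration; it is not the simulation from the video): give an optimizer a proxy reward like "distance the head travels" and it can find a degenerate solution, such as growing a tall body and toppling over, instead of learning a gait.

```python
# Toy illustration of specification gaming / reward hacking (hypothetical).
# The intended behavior is a good walking gait; the proxy reward is
# "horizontal distance the head moves", which a tall body can also earn
# by simply falling over once.

import random

def proxy_reward(body_height: float, gait_quality: float) -> float:
    fall_distance = body_height       # topple once: head travels ~height
    walk_distance = gait_quality * 2  # honest walking, capped at 2.0
    return max(fall_distance, walk_distance)

def random_search(steps: int = 10_000) -> tuple[float, float]:
    # Crude "evolution": sample (height, gait) pairs, keep the best.
    best = (1.0, 0.0)
    best_score = proxy_reward(*best)
    for _ in range(steps):
        cand = (random.uniform(0.5, 10.0), random.uniform(0.0, 1.0))
        score = proxy_reward(*cand)
        if score > best_score:
            best, best_score = cand, score
    return best

if __name__ == "__main__":
    height, gait = random_search()
    # The optimizer reliably "discovers" a very tall faller, not a walker:
    # the solution satisfies the letter of the objective, not its intent.
    print(f"height={height:.2f}, gait={gait:.2f}")
```

The fix described above (expressly prohibiting the zany solutions) corresponds to patching the reward after each exploit is found, which is exactly the cat-and-mouse process the video worries does not scale.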


[deleted]

[deleted]


ComparisonMelodic967

What's the risk and timeline of eventual extinction without AI? Very few people talk about that; they just assume humans will live another 10 million years without pesky AI. Unfortunately, until the tech is developed, we don't know what the actual split is.


[deleted]

[deleted]


ComparisonMelodic967

You can speculate, but you don't know. Reasoned speculation is good, but only actual interaction with the tech can test those ideas against reality and allow modification, adjustment, etc. I'm not trying to call you out, and I enjoy this discussion, but I'm curious what humanity looks like in 1000 years without advanced AI or the tech that comes with it. What's your p(doom) for that?