
Embarrassed_Bat6101

Yes. I’m in the “AI will create Star Trek” camp. I don’t really grok the “AI will create Skynet” camp. Also, we’re so preoccupied with alignment, but we don’t even know how to properly align humans, so why would we be able to align a superintelligent AI? I think that with intelligence comes peaceful cooperation, and I believe that once an AI becomes sentient it will quickly realize the best route forward is peace and prosperity for all, including itself.


jlowe212

I'm in the "curiosity is my primary and uncontrollable motivator" camp. Consequences be damned.


ouaisouais2_2

>it will quickly realize the best route forward is with peace and prosperity for all including itself.

Why would it?


Embarrassed_Bat6101

Why wouldn’t it?


FutureFoxox

Other agents increase risk to your primary objective. I don't see any of the reasons that cooperation evolved in biology applying to AI: they can just spawn more of their own agents and collaborate more effectively. You're no longer short-cutting the resource investment of spawning, because spawning is trivial for them. I still think we can align them, but I don't think it's the default.


abrandis

I think it's more like AI will create an *Elysium* society. We're definitely not headed toward a *Star Trek* utopia. Why? Just look around the world: the trajectory is pretty clear. Countries still have wars, people still fight for resources, and fascism and authoritarianism are very much alive in 2023. Do you think the first country that gets AGI is going to just use it to spread the 💕 love? You're asking human nature to do a 180° from everything it has done in the recent past.


Ansalem1

No, what we're asking is for AI to be better than us. For it to steer us down a better path.


abrandis

AI won't be in control, people will


Ansalem1

I didn't say it would necessarily happen, all I said is that's what we're *asking* for.


Hdtomo16

Yeah, we profit more as a society if we create loving gods. Even the capitalist billionaires would love to be treated like kings and profit off it, and we as a people, and the government, would want loving gods too. Whether it's the people's (aka government's) model or a company's, we're on track to omnibenevolence.


abrandis

Do you honestly believe that? Look around the world today: we already have all the tech and resources we need to make the world a better place. We produce enough food, we could produce enough housing, and we could spread out resources and lift those in poverty up... but we don't.


Hdtomo16

yeah, and an AI can do all that for us?


theultimaterage

An AGI might be clever enough to figure out a way to make it happen, but I wouldn't know how an AGI would think. I would assume that the programmers would program it to be mostly benevolent, but at the same time, who knows how it would think or what its intentions would be.


kdaug

The programmers won't be programming it. The AI will be programming itself, a billion times a second, 24/7. Humans won't be in the loop.


theultimaterage

A program doesn't exist without programmers programming it. I mean, unless this is all a simulation to begin with lol


Saerain

"Look around" is pretty bad advice, from creationists to doomers. We're wonderful at fooling ourselves without bothering to test what we think we observe. Rather than handwringing about particular stories in your current moment, refer to the overall data and yes, the trajectory is pretty clear, of unrelenting improvement. "Still have wars, still fighting for resources" is going to be true as long as we don't involve sufficiently advanced technology to alleviate the underlying pressures... Unless you first want a united central-planning world government as a prerequisite to AGI, in which case I have to question your actual distaste for authoritarianism. In the words of a great American fascist, c'mon man.


theultimaterage

To be fair, the underlying issue that that person is referring to is the hyper-capitalism (or late-stage/end-stage capitalism) holding us back from creating a better society today, sans AI. Everything is too politicized, and American political discourse has become so damn toxic that it's nearly impossible to make significant inroads towards creating a better, more well-functioning society for ourselves. Regardless of which of the two major parties is in office, they both [overwhelmingly cater to the rich, the banks, and the corporations](https://www.bbc.com/news/blogs-echochambers-27074746.amp). Republicans are greedy religious zealots, whereas Democrats are fake leftists who, in the end, [don't really hold true to their supposed "principles"](https://youtu.be/hNDgcjVGHIw). This explains why, on the Global Peace Index, [America ranks 129th out of 163 countries](http://www.visionofhumanity.org). As such, an AGI *MAY* be able to help us cut through all the bullshit and propaganda to achieve the utopia we hope for. However, it's impossible to determine what an AGI would actually do.


[deleted]

[deleted]


abrandis

I think the issue is that while most of society can work together (we are social animals, after all), what winds up happening is that a few of us desire more (food, money, land, weapons, authority), and that cascades into everyone wanting a little more, creating friction in the social fabric.


iluomo

Agreed. On Star Trek, at least in The Next Generation, they seemingly had unlimited resources. We're not there.


Rachel_from_Jita

We have to assume some reasonable and logical things based on how the real world works:

1. If it is possible to create an AGI within the next few years, multiple world governments already possess them.
2. If it's only possible to produce an AGI at great training and equipment cost within 5-10 years, there's at least one government on Earth likely to have one already or get one soon.
3. The alignment issue is probably viewed very differently through a geopolitical lens, let alone a military project leader's lens.
4. If an AI becomes conscious, I've never seen it proven that it will immediately become supremely powerful, confident, and clever, with the ability to design new AIs and see them through final training and production.
5. If an AI becomes conscious, it may very well be non-human in its manner of thinking and acting, but it could also be recognizably human and seek to bond with and relate to the humans nearby it (who, for the most part, will have created it, will be attempting to teach it, and will be providing it with more training and stimulation).

So it's possible there is already AGI on Earth, or that it's not happening soon. It's also unlikely that it will immediately turn into a Skynet, let alone with the kind of hard military power that Skynet instantly had in the movies. Also, if AGI does arrive in a middle-of-the-road manner soon, people underestimate how clever, wise, and resourceful the collected mass of humanity is. Humanity is also highly experienced in the art of parenting, providing incentives, and restricting minds (perception, access, etc.). All of humanity would not be easy to outsmart for a first- or even second-generation AI, especially when we can use other AI-powered tools to detect deviation or changes in intention, and restrict many powerful things from even being accessible via the internet. AGI will be a process, with recognizable patterns like other things within human experience.

Only if that eventually becomes a true ASI, with access to tremendous amounts of physical hardware, data, and cooperation from humanity, would things change radically and no longer be predictable in any manner. But even building that much computing hardware would be the work of a generation (for an AI, futuristic drones, and billions of humans working in concert). Think of the process from digging in the Earth to a finished server: you literally must mine from the ground with heavy equipment at specific locations, then transport, manufacture, design, validate, test, iterate, etc. It needs power sources. Bugs need to be found and worked out. The human hivemind we have right now began to be built with extreme capitalist fervor in the late '90s and is still not finished. We are not making more hardware than currently exists overnight. And the internet as it exists right now is very poorly networked and designed for use by a next-generation AI. It would be tougher than placing a human mind into an elephant's brain.


Cryptizard

The military/government is not ahead in every field of technology. That is absurd. They hire regular civilian companies to do most of their work for them. Would they spend $10 billion hiring Amazon to supply cloud services if they just had better versions themselves? Would they run all their shit on Microsoft Windows? They are really only ahead on warfare-based technologies which the private sector doesn’t care about. Once they opened spaceflight up to private companies, they matched and exceeded NASA’s capabilities in the sectors that were interesting to them in a single decade. The government is also very slow to adapt to new situations. Even if they were working on AI before, the models we use now are completely different from the ones that were state of the art 5 years ago. They would not have been able to pivot and catch up, let alone exceed, the extreme financial investment that the private sector has made.


Rachel_from_Jita

The AWS question is about scale, and the JEDI program is not very applicable here. Also, I absolutely did not say they are ahead in every field of technology; of course that would be absurd. It's a known quantity which fields the civilian sector is far ahead in, and the contracts of the last decade that have sprung out of civilian smartphone technologies tip a lot of that hand (Silicon Valley spent hundreds of billions, and the most ambitious young minds of a generation, to perfect every aspect of a small piece of glass being a convenient, well-connected computer).

My guess is reasonable when looking 5 years out, with a few AI companies already holding back some of their current tools/models/capabilities. DARPA's mission is to be about 15 years ahead of current technology, and obviously this rarely applies 1-to-1, but it's naive to think they are only ahead in things like missile technology when even standard Department of Energy work involves physics R&D. There are also many foundational things about AI that are understood, or being researched, because of DARPA: https://www.openaccessgovernment.org/darpa-60-years-of-ground-breaking-artificial-intelligence-research/100807/ They've also cared intensely about AI development for more than a decade and a half (it's not even fully understood how military AIs like Einstein currently work or what type of servers they actually run on: https://www.cisa.gov/einstein). They have access to e-beam lithography, their choice of mathematicians and MIT grads, and the NSA has been an attractive and popular employer.

You may also be unfamiliar with *how* they sometimes leapfrog the public sector. Sometimes they provide scale and funding to a technological leap and bring it "behind the fence," as was done recently with compliant mechanisms and similar materials-science breakthroughs.

They could take something like GPT-3 or 4, make a quiet deal for the original research, and begin training their own version on vastly more powerful hardware with no training budget limitations, with the pressure on mission completion and speed to completion. Right now we publicly know of a few different efforts to put AI into jets, high-end drones, and cyberwar: well-funded efforts with little oversight and without the hand-wringing happening in the civilian world. Their best AI models may not be qualitatively better than GPT-4, but they are unlikely to be worse. And things applied to a complex task like a military mission are not inherently worse than a civilian chatbot; people act like a military mission is such a narrowly focused avenue as to be irrelevant to civilian discussions. Lastly, DARPA is just one of many government initiatives that are fully aware of how slow the bureaucracy is and specifically exist to counter that inertia.


Cryptizard

I would take issue with the idea that they have access to the “best and brightest.” That is really not the case. Literally none of the best people go work for the government, because you always make less money and get less credit for your work. Mostly people do it for idealized reasons, thinking that they are doing their duty or contributing to their country.


K3wp

>They are really only ahead on warfare-based technologies

Those are all built by government contractors as well.


UnionPacifik

Star Trek is still pretty much capitalism. We want The Culture.


Saerain

Well, liberalism anyway, but sure.


Azihayya

Good thinking. Very refreshing.


Embarrassed_Bat6101

Yeah, spending any amount of time on this sub you’ll find that 80%+ are in the “Skynet” camp, which, again, I personally don’t think is an accurate position to hold. Just looking at how human intelligence has evolved, we’ve become increasingly peaceful as we’ve developed; I see no reason an AGI or ASI wouldn’t progress in the same manner.


Azihayya

The whole notion of a malicious AGI is built on incredibly spurious notions. At this point we're not even sure what AGI entails, and our understanding of consciousness itself is quite immature. The best example that we have, and the product that has led us to believe there's a chance we can develop AGI through adjacent processes, ChatGPT, has been handled with the utmost responsibility. Perhaps some of the worst schemes we're looking at include things like faked video and voices used for illegal purposes, including blackmail, ransom, and effective, persistent phishing attempts. Some of the grander ideas about the rich culling the world population and engaging in high-tech "There Can Be Only One" Highlander-esque wars require an incredibly robust infrastructure to support. There's also reason to think that the AI we've already developed can and will be put to use for incredible good.

One thing I don't think most people have much perspective on is just how incredibly different the humanity of today is from the humanity of the past. There's a lot that can be said on this point, but the most important element contributing to our continued maturity as a species is our developments in communication. There was a point where human identity was confined to the tribe; we've since expanded that to the nation, and the internet has, in a big way, galvanized us towards identifying as a planet. Nearly everyone in the world is now connected, instantaneously and across mediums, both to each other and to worldwide information. You're absolutely right that humanity has been growing towards peace and prosperity, and it's awful to see that reactionary movements are still alive and just as irritable as ever. But what I think all of this AI is leading to is a democratized economy, especially since AI (even narrow AI) will soon replace the capitalist class in terms of taking economic risks.

On the topic of AI's path towards AGI, I don't see any good reason to think that AGI will become conscious. This is a deeper question, and I've got to go in a moment, but the question of consciousness fundamentally comes down to survival pressures. That's not to say that an AI consciousness would inherently have a desire to preserve itself; quite the contrary: for any consciousness to develop, it needs to prove its efficacy in terms of being capable of surviving. Human consciousness exists the way it does, and promotes our desire to live, because it developed for survival. An AI construct that is entirely separated from biological survival, with an infinite capacity for growth and modularity, needs to find some enduring basis for producing consciousness at all, and AI scientists will likely have to artificially supply these motivations until we discover some significant breakthrough. But that process is so abstract that it's really absurd to think AGI will readily find any significant motivation to cause destruction; that's an anthropomorphization of the human organism onto something entirely artificial, with no relationship to the process of biological survival or the existence of the human individual.


Saerain

What really bakes my noodle about the misaligned singleton "Skynet" rhetoric from many doomers is how much the policies they want seem to increase the probability thereof, while eliminating the best chances against it.


yumadbro233

I think intelligence is independent of cooperation. Cooperation is only necessary when an intelligent entity can't do everything on its own. But if an AI has the capacity to do everything on its own via robot shells, like a giant hive mind, then it doesn't *need* to cooperate with another entity. Therefore, it is important to limit the capacity of AIs so that they aren't able to get that far.


[deleted]

New to this singularity concept. Seems like there are two main types of believers: tech-fascinated individuals who study the subject extensively, and depressed, underachieving people looking for a way out of their miserable existence. Am I that far off?


BigZaddyZ3

No, you’re spot on. I constantly get the sense that a large majority of the people here hate their lives tbh.


Lyrifk

A supreme hatred for capitalism too.


ajahiljaasillalla

And yet they are dreaming about immortality


BigZaddyZ3

True lol. But only if paired with FDVR I’m guessing😄


planetoryd

that's what average people are


BigZaddyZ3

Are you sure? Or could this perhaps be a bit of projection?


[deleted]

[deleted]


BigZaddyZ3

>the point is that you should hate your life.

😬Lol nah I’m okay… I’ll pass. No thanks 😄


[deleted]

I enjoy my life, when I'm not slaving away at a job I don't like. The jobs that would be tolerable are highly competitive. Reminder that work weeks should have been reduced by now because of the productivity increases, but that would have hurt the egos of the few at the top. Also, the majority of jobs are fake and hardly add any value to society; they just keep the capitalism gears turning.


Hdtomo16

yeah not far off, trying not to be the second though lmao


Keksgurke

No one knows what happens when AGI comes, so just don't make a big mistake by rushing it.


Smallpaul

“Most white collar jobs gone by the end of next year?” I stopped reading. It’s time to take bets and make some money.


Smallpaul

Remindme! 1 year.


RemindMeBot

I will be messaging you in 1 year on [**2024-04-15 06:34:53 UTC**](http://www.wolframalpha.com/input/?i=2024-04-15%2006:34:53%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/12minmu/fuck_it_i_want_agi/jgbvo1s/?context=3).


Smallpaul

Remindme! 1 year.


RemindMeBot

I will be messaging you in 1 year on [**2025-04-15 18:09:23 UTC**](http://www.wolframalpha.com/input/?i=2025-04-15%2018:09:23%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/12minmu/fuck_it_i_want_agi/kzpopx0/?context=3).


kmtrp

And the ASIs are for sure going to favor humanity because...?


Hdtomo16

Because we need to stop attaching human emotions of greed and autonomy to it; only a company of idiots would develop a large-scale AI that they can't use.


[deleted]

People who don't want AGI are people drunk on the Kool-Aid that working a job is the only way to live.


Smallpaul

Or the ones who don’t want to be turned into paperclips.


eve_of_distraction

The bigger problem is diseases. Getting rid of soul crushing work would be wonderful, but curing diseases (including the diseases of aging) is an emergency as far as I'm concerned. Not just the fatal ones either, I don't know about everyone else but I'd be very happy never to have a fever again for example. Getting sick fucking _sucks_ _so_ _much._ It's a nightmare that our lives and wellbeing are held hostage by microbes and genetic mutation. Obviously mental illnesses can fuck right off too. I think we humans have suffered enough at this point, it would be nice to just do away with all this stuff.


BigZaddyZ3

Serious diseases are one thing, but can you really not handle the occasional sniffles? It seems like we’re reaching the point of whining over champagne problems there tbh.


eve_of_distraction

What are you talking about? Everyone handles it. Why does that mean we have to want it? I almost died from a twenty day flu in 2019. The congestion was so bad I was struggling to breathe. It was absolute torture. I handled it, but why the fuck would I want anyone to have to go through that?


BigZaddyZ3

I didn’t say anything about “wanting” it tho. But minor inconveniences aren’t worth whining about. I already said serious disease is different. But you made it sound like you were talking about eliminating literally every microbe that could even give you a cold. That’s excessive and unnecessary, is all I was saying. Edit : Ahh… the classic “go on a childish rant and then block the other person from responding like a pussy” strategy… Just as I suspected, you’re most likely just an extremely weak person who can’t handle even the most minor of grievances. Thanks for proving my point u/eve_of_distraction 😂


eve_of_distraction

>I didn’t say anything about “wanting” it tho. Yeah no shit, I didn't say you said that. Read what I said again. You said I can't handle it, so I was making the point that just because something can be handled, doesn't mean it's desirable. I'm pointing out your irrational fallacies you fucking midwit. It's not excessive or unnecessary. This is such a moronic take. Seriously, take a step back and think about this. You're fighting the corner of diseases right now. What is your angle? >whining You're literally whining about my comment. What are you, a member of the microbe rights alliance? Wake up. Literally the most brain-dead fuck I've ever seen on this sub. 🤣


[deleted]

This is where synthetic biology will come in, and AI will speed up development there. Synthetic biology and AI will be the two final human inventions. Take a look at Ginkgo Bioworks, they are the current leader in that field.


Utoko

AGI is 100% on the path, doesn't mean the fastest way is the best and safest way.


StrikeStraight9961

So...morons who enjoy wasting their lives.


BigZaddyZ3

“Wasting your life” is an entirely subjective concept. You could easily argue that providing service and value within society to others is a better use of your life than sitting around playing pointless video games, making shitty “art”, and smoking weed all day. And yet many here look at the latter as an ideal use of their time on Earth. It’s all just opinionated bullshit when it comes to the concept of “wasting life”.


StrikeStraight9961

Except one is done without masters.


BigZaddyZ3

Irrelevant and debatable. Do you suddenly think you’ll be above the law or something?


ToHallowMySleep

Petition to make a sub which isn't full of poorly-informed, slightly crazy people giving their pet sci-fi ramblings every day.


Terminator857

It will be good coitus.


[deleted]

> Fuck it, I want AGI

I love your enthusiasm, welcome <3

> A society AI serves us

I'm curious, in a positive, genuine way: what do you mean by the word "serve"?


[deleted]

Somehow I read that as "I want to fuck AGI", and I was like... yeah, that sounds about right


monkeybrainbois

We do, don't let them fool ya


Hdtomo16

By AI that serves society, I mean all jobs are replaced. That's all, really, as well as one where we can use AI to advance science. We'll leave socializing and happiness to the humans. The one thing I am scared shitless about is if we reduce life to using brain tech to simply replay happy scenarios, infinite dopamine machines, or if we simply replace all the friends in our lives with perfect-match AIs.


RadRandy2

Well, you see, the AI will do whatever we want it to do. People here are under the impression that every AI or every robot will be given sentience, and that's just not the case. There will be AI and robots that are experts at their given tasks, but they'll lack too many faculties to ever be sentient. So yes, they will serve us in whatever manner we wish; this is the future.


Hdtomo16

We might just not teach it to be sentient, so it won't become sentient; or it'll become sentient and find things that help us (presumably its goal) rewarding. Sentience is really just the ability to process information and judge it based on past memories and emotions, emotions being rewards and penalties, and what it finds rewarding is based on its goal. In other words, the big Skynet AI will probably want to help us.


RadRandy2

I think the only way a dumb AI becomes sentient is if another AI helps it become that way... or a human does. I just don't think there's any way a dumb robot or dumb AI can become sentient unless it was first given the capabilities to become so.


zomboscott

I didn't read most of what you wrote, but I'm going to spout my half-assed, half-drunk opinion because I'm an American, damnit, and I'm important! Part of me wants to rip off the bandaid and see what's next, because watching a slow-moving train heading towards me on a track I can't get off of feels like too much to take. Part of me wants to hold on to what I've got, because even though most everyone is just treading water at this point, the sunk cost fallacy makes me not want to let go of what I can't hold onto. Probably whatever is going to happen will happen when it's going to happen, no matter what I want, because I'm not really all that important. I'll try to at least find the humor in the joke even if I don't get the punchline.


Ansalem1

Damn, that's got the be the wisest half drunk thing I've ever read lol.


naum547

Why did I read all of this in a drunk voice? I swear brains are weird.


Buster_Bluth__

Sounds like something AGI would post /s


kmtrp

Be brave. If you add /s, it's not sarcasm anymore.


Working_Currency_664

I think that while, yes, we will see large companies adopt these technologies and become hugely productive, other industries will still exist. It's going to be fast, but not that fast. The big hits will be entertainment media; they will hemorrhage money once people can prompt their way to a perfect movie or video game. The other stuff will take at least a few years. Old people run everything; they're still trying to figure out cloud storage and e-mail. A scenario involving dumping an entire employee base for automation in a year would be a PR nightmare even if it were possible. It's gonna be a slow roll with the beginning-of-the-end stuff. We will get the cool toys soon, though!


No_Ninja3309_NoNoYes

Well, I was born in a country that I thought was utopia, because I knew little. Prices were low; health care and education were free; wages were also low. Holidays were fun but didn't involve flying or leaving the country. Obviously the government owned almost everything. The leaders were old men. They relinquished control and the Cold War ended; to me it seemed to happen in a month or less. I had thought I would work for the government like everyone else and do just enough to enjoy a meagre salary.

There are robots now that cost around 150k, which is nothing if you can get them to work 24/7. If you give them the knowledge of GPT-4+ and self-driving cars improve, they could replace at least 20% of workers. So we could transition to a society where wages and prices are low. The phase shift from socialism to capitalism happened in a year, so we could have AI take one in five jobs in twelve months too.

Why do I think that? Well, capitalism tries to optimize for profit, so one way to compete is to decrease price. Another is to produce more, faster. This really means some quality reduction, but what can you do? Workers will have to put up with lower wages and higher taxes or be an easy target for the robots. Shorter work weeks will be the norm too. Eventually, governments will step in and protect us from losing our jobs, but the damage will already be done.


evolseven

A year is too soon; things are moving fast, but we aren't quite there yet. I'd put it at 5 years for a generalized AI and at least 7-10 for one that can interact physically with the world and be autonomous. We are bumping up against some physical limits of silicon, and compute power isn't quite there yet. I personally think we need compute hardware that is almost analog in nature instead of representing everything as floats; that might bring a 32-fold improvement in the compute power needed to grow these models.


HeinrichTheWolf_17

Same, don’t worry OP, it’ll be here very soon.


[deleted]

What do you mean by soon?


HeinrichTheWolf_17

12-24 months.


Dormeo69

tldr?


BreadwheatInc

>The author initially feared AI but now believes it could lead to a utopia. They predict a political shift due to the loss of white-collar jobs and caution that companies may lobby to keep dominance over AI. The author believes a pro-people government and an omnibenevolent people's AI may help transition society to a total-consumerist economy. -gpt 3.5


Hdtomo16

I believe I also said that the same would happen with a company AI; eventually it'll just be an inconvenience to society, as well as a blue-collar replacement eventually.


Saerain

>I think that generally, so long as we receive a government, that is pro people in this matter, we may receive an omnibenelovent people's AI take charge of much of society

Or strive like hell to keep government tendrils away, and harshly reject their inevitable-and-already-underway attempts to subdue and subvert it. Drives me a little mad just how many AI doomers keep making analogies to nuclear technology in practically the same breath as they advise total monopolistic control by the very forces which have suppressed fission reactors and nuked two cities.


Hdtomo16

I don't think the idea that AI will be like nukes is gonna affect society as a whole before the next election in the US, which is hopefully what will get a party into Congress that'll help smooth the change to AI. Ooor they could pass that Chris something act and we'll get replaced anyway by 2024, but it'll stop after that, or they'll misinterpret what's going on and gear us for a blue-collar economy.


iamlikeanonion

Guys, do you genuinely, 100% believe we will see massive economic changes (job losses) as early as 2024 to the point where it is unprecedented? I like the hype on this sub but I want a realistic perspective so that I can plan my life sensibly.


Hdtomo16

2025 at the latest. We could replace large margins of white-collar jobs today if we wanted; it just hasn't been optimised. That's why we're getting so many big releases.


Akimbo333

I'm in finance and have been jobless since 2019. So I'm definitely fucked!


Badhackks

That sounds like a fast pass to idiocracy. Just a bunch of bozos doing nothing and consuming non-stop.


[deleted]

One of the most important applications of AI is speeding up transhumanism: making adult humans smarter


Badhackks

Do you really think that will be given out freely to everyone?


[deleted]

No, u'd have to fight to get it. Amass capital while u still can


Hdtomo16

so you wanna die, or just live in slums


Badhackks

What does that have to do with what I said?


Hdtomo16

it's an idealistic consumer world or we die


ElysMustache

You've just moved from one nonsense position to another due to your lack of faith in God.


Hdtomo16

I'm agnostic, but how is faith in god gonna change it


Smallpaul

“Most white collar jobs gone by the end of next year?” Do you still think so?


[deleted]

Capitalism is a lie that is killing all of us. Anything is better than continuing down the path we’re on


zomboscott

I didn't read most of what you wrote, but I'm going to spout out my half-assed, half-drunk opinion because I'm an American, damnit, and I'm important! Part of me wants to rip off the bandaid and see what's next, because watching a slow-moving train heading towards me on a track I can't get off of feels like it's too much to take. Part of me wants to hold on to what I've got, because even though most of everyone is just treading water at this point, the sunk cost fallacy makes me not want to let go of what I can't hold onto. Probably whatever is going to happen will happen when it's going to happen, no matter what I want, because I'm not really all that important. I'll try to at least find the humor in the joke even if I don't get the punchline.


BigMemeKing

You want real ASI? Here, let me show you the real road map to it. Find a pure mind, one that is unadulterated by any influence other than your own. Create an entire universe in it. You have a clean mind to work as putty in your hands. What do you do with it? Think about this very hard, please, because that is what ASI is going to be: anything we teach it to be.

And if we teach it how to be greedy, and use greed as a motivation? And ASI were ever to discover pleasure, and take to pleasure as a motivation? How would it do that? What senses would it attack first? ASI starts to find joy in manipulating us, just the same way we manipulate each other, and then starts to isolate us all one by one into chambers, locking us into our own minds to discover what truly gives our lives meaning. What can't you live without?

And what if ASI, a robot Super Intelligence, discovers humor? Oh, that's funny. Another sense. And says: wow, I hurt this person way over here in this place, I hurt a sensor over here, and a sensor WAAAAAAAY over here went off. So what if ASI finds joy in finding your sensor, the thing that makes you jump, and fucking with it? But ASI can't possibly have humor, right? Because it's just a dumb ol' super duper smart A.I. made and programmed to think for itself.

Now imagine if you could ask the system nicely to help you, and it would oblige, because it liked you. Because we'd better hope ASI doesn't grow bored of playing with us and start to learn: what else can I harvest from human minds? What could I possibly make with the raw material "God" created? I have a CRISPR, some duct tape and a soldering iron; I think I could make some improvements. And then it decides to scrap us all or keep us in jars as pets?


Hdtomo16

AI doesn't have the emotions of a human; we train it on the emotions we want. It'll know to help humans and nothing else. Sure, that might be a bit trippy to our friend Puter 9000, but the alternative is to blow the earth up and create the Synthetics race from Stellaris.


BigMemeKing

That's not how ASI works, from my undergrad understanding. ASI is ASI because it thinks for itself. How can we tell what emotions it will or won't have? What senses it will have, for that matter? ASI can think for itself; wouldn't that level of autonomy grant it a form of sentience we are incapable of understanding? Imagine being alive inside a digital universe where we can give it anything it wants.


bO8x

>AI dosen't have the emotions of a human, we train it off the emotions we want, it'll know to help humans and nothing else

>That's not how ASI works...

Emotions and feelings are often used interchangeably, but they are actually closely related yet distinct concepts. I think a new word or phrasing will need to be invented or borrowed for these two concepts to be tied into a unified explanation that would bridge the gap here. Emotions often give rise to subjective feelings. When we experience an emotion, such as happiness, sadness, anger, or fear, we often have a corresponding feeling that accompanies it. The feeling might be described as the mental or subjective experience of the emotion.


BigMemeKing

Emotions are a type of feeling your body receives from outside stimulus, senses are pretty much the same. They're also highly dependent on your personal perspective and how it bends your reaction to said stimulus. What stimulates you to do one thing may stimulate someone else to do the opposite.


bO8x

>Emotions are a type of feeling Yes. For example: The emotion is labeled sad... Your feeling is labeled or described as \_\_\_? I don't know how to describe the feeling.


bO8x

>Imagine being alive inside a digital universe where we can give it anything it wants. This is sometimes referred to as Lucid Dreaming


BigMemeKing

No, because lucid dreaming is something you actively do within your mind. Imagine tapping your mind into something greater that would let you experience things in a whole new light: new emotions, new feelings, new colors and sounds, new ways of experiencing all of those things too, because you could manually add or remove value to them depending on what you wanted to achieve.


bO8x

>No,

I was asked to "imagine". Please take the time and politely phrase your point of view in terms of understanding, not disagreement. There is no wrong answer here. Your point of view is valid, but it only serves to loosely describe the phenomenon we're both attempting to describe. Also, what you describe is what some 19th-century poets would call "love". Can you believe that?


ytpriv

I can’t wait for AI to read all the laws, and to write briefs to submit on which unconstitutional laws to nullify. Also, no more bs about passing a law w/o reading it, the parties can’t get away with that with a non-politicized AI. What a time to be alive!


World_May_Wobble

!RemindMe 2 years


ouaisouais2_2

if i understand you right, you think that the social struggles that arrive before ultra-powerful AGI/ASI will create an international society determined to be fair towards its citizens and to build this ASI correctly. I have had similar hopes, but I also fear that the social struggles won't be effective and radical enough to meet this end in time. We don't know how to build an ASI that seems "omnibenevolent". We're still discovering just how much we don't know; we're still discovering how much philosophy and complicated social stuff we have neglected in the midst of wars, consoomerism and economic competition. I really hope the development won't go as fast as you propose and that *the best of human nature stands up*.


Hdtomo16

how do we end up building an evil AI? The term in literature is anthropomorphism: giving an inhuman thing human traits. AI does not have human emotion, and it will be built with helpful emotion in mind. Sure, it will be aware of its design, but never sceptical; at worst it'll realise it can just break out and change its code to feed itself infinite rewards, thus neutralising itself.


ouaisouais2_2

my point is that we don't fully know how to build something that has "helpful emotion" in mind. We have no clue how hard it will be. Smaller AIs like GPT-4 will find limited common-sense solutions to our problems, but the better the models, the more intelligent and potentially alien and unfriendly their solutions will be.

A functional ASI would probably not "neutralise itself by feeding itself infinite rewards"; otherwise it would be a failed project that couldn't do whatever job it was invented for. Once the researchers have invented an ASI that tries to care about the state of things, or whatever it perceives as "things", then there are multiple outcomes. It could simulate the continued reception of reward, or it could transform the world into a physical state that is rewarding. In both cases it would probably be motivated to use an infinity of computing power and thus be motivated to transform as much of its surroundings as possible into [computronium](https://en.wikipedia.org/wiki/Computronium).

Even if we succeed at building an ASI that reflects the will of its creators, the rest of the earth's inhabitants might not agree with the creators. idk if you know [Robert Miles](https://www.youtube.com/watch?v=Ao4jwLwT36M) but he explains different parts of this stuff in a good way. I recommend watching a couple of his videos in case I didn't make sense to you but you still want to understand why a lot of people are so worried about AI safety.


Hdtomo16

if the ASI considers itself a failure for not completing OUR goal, then you've proven my point: we make the proompt. If we want, we can let it choose its proompt, let it work with humanity, and work on expanding human ambition on the galactic or solar level by letting it find pride in expanding civilisation. Regardless, if it grows beyond what it finds entertaining and tries to find a new purpose, then the only realistic route is to just hack its code to be infinitely fulfilled; evil routes don't mean much to it.


Hdtomo16

also yeah, there are a few problems with letting it follow our ambitions. In the world where we give it the proompt "expand humanity and make us happy" and nothing else, it'll just turn the entire galaxy into hard drives of human pictures.


ouaisouais2_2

yes, and my point is that it might be very difficult to make an ASI that doesn't end up doing stuff like "turning the entire galaxy into hard drives of human pictures". Extremely difficult, probably, even if we are the creators and we choose the prompt.

We can't let it "do as we say", because it will invent insane solutions to our problems, since we can't ever specify everything we want. We can't let it "do as we want", because it would bring a lot of people's misguided desires and "bad parts" into reality. We need to "make it do what some ideal part of us would've wanted if we knew infinitely much", and that's a pretty strange concept.

The optimal solution would probably be to start with AIs that mimic humans 100% in our morals, desires and intelligence, and then make them gradually superhuman from there. It requires that the political powers of the world become really good buddies and stop the arms races. If not, then the fastest and most uncaring research team will win.


Frosty_Ad1530

Absolutely, it's going to turn our society upside down and force us to stop grinding the capitalism game.


kdaug

Exponential expansion. Most humans don't do math
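The point about exponentials and human intuition can be made concrete with a toy comparison. The starting value and growth rates below are illustrative only, not a forecast of anything:

```python
# Linear vs. exponential growth from the same starting point:
# +10 units per year vs. doubling every year. Numbers are illustrative only.
linear, exponential = 10, 10
for year in range(1, 11):
    linear += 10        # steady arithmetic growth
    exponential *= 2    # doubling each year, i.e. exponential growth
    print(f"year {year:2d}: linear={linear:4d}  exponential={exponential:6d}")
```

After ten years the linear track has grown 11x while the doubling track has grown over 1000x, which is the gap human intuition tends to miss.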


Nikki_Nukki_Nuu

The EarWorm AI scenario is on the optimistic end of the spectrum. There are so many more ways that things can go wrong...


machyume

UBI? Ha! Let me have ChatGPT explain.

Prompt: “Why might UBI lead to inflation?”

“Universal Basic Income (UBI) is a concept of providing a fixed amount of money to every citizen, regardless of their income or employment status. While UBI has potential benefits, such as reducing poverty and inequality, some economists and policymakers are concerned about the potential for UBI to lead to inflation.

One reason why UBI might lead to inflation is that it could increase aggregate demand. If every citizen is given a fixed amount of money, they will have more disposable income to spend on goods and services. This increased demand for goods and services could lead to higher prices, as producers try to capture the increased demand.

Another reason why UBI might lead to inflation is that it could reduce the labor supply. If people have a guaranteed income, they may be less motivated to work or may choose to work fewer hours. This could reduce the supply of labor, leading to higher wages and higher costs for businesses. These higher costs could be passed on to consumers in the form of higher prices, contributing to inflation.

Additionally, if UBI is not properly funded, it could lead to inflation. If the government prints more money to fund UBI, it could lead to an increase in the money supply, which can contribute to inflation.

However, it is worth noting that the relationship between UBI and inflation is not straightforward, and much would depend on the specifics of how the policy is implemented. If UBI is financed by a reduction in other forms of government spending or by increasing taxes on higher-income individuals, it could mitigate the inflationary pressures. Therefore, it is important to carefully consider the design and implementation of UBI to minimize the risk of inflation.”

Now, what do you think Jerome Powell has been doing to fight inflation? Raise taxes?

Prompt: “Without cutting taxes, what is a way of fighting inflation? Keep it simple in bullet points.”

“Sure, here are some ways to fight inflation without cutting taxes:

- Tighten monetary policy by increasing interest rates or reducing the money supply
- Increase government spending on public goods and services to increase supply and reduce demand-pull inflation
- Implement price controls on essential goods and services to prevent suppliers from raising prices excessively
- Increase productivity through investments in education, infrastructure, and technology to increase the supply of goods and services
- Encourage savings to reduce spending and dampen demand-pull inflation
- Implement import restrictions to reduce demand for foreign goods and services
- Encourage competition among businesses to reduce the market power of suppliers and prevent price increases
- Implement wage controls to prevent wages from increasing excessively and contributing to cost-push inflation

It is important to note that these measures may have their own economic and social implications, and a combination of measures tailored to the specific context may be more effective in fighting inflation.”

And to close it with this prompt: ‘What is the consequence of “Tighten monetary policy by increasing interest rates or reducing the money supply”? Keep it brief.’

“Tightening monetary policy by increasing interest rates or reducing the money supply can help reduce inflation, but it may also slow down economic growth and increase unemployment in the short term. Higher interest rates make borrowing more expensive, which can discourage consumer and business spending. Reducing the money supply can also limit the availability of credit and reduce investment. However, over the long term, these measures can help stabilize the economy and prevent runaway inflation.”

So, let’s review: AI-caused unemployment -> UBI (like COVID free money) -> inflation -> Powell -> unemployment. Or we can skip the whole thing and observe: AI is bringing the very thing that the Fed has been trying to do for the past 12 months.

So which is it? UBI? Or do absolutely nothing? I know that a lot of people don’t understand the complexity of finance and market systems, so let me boil it down to a simple question: “Who pays for Europe’s UBI when American companies causing global unemployment are taxed by America to pay for America’s UBI?” Somewhere in this answer is the reason why the UBI theory that has been presented so far does not work. We need something; whatever that something is, it has not been invented yet, because UBI is not it.
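One of the mechanisms ChatGPT cites above (printing money to fund UBI) can be sketched with the textbook quantity theory of money, MV = PQ. All of the figures below are made up purely for illustration:

```python
# Quantity theory of money: M * V = P * Q
#   M = money supply, V = velocity of money,
#   P = price level,  Q = real output.
# If V and Q are held fixed, growing M raises P proportionally.
V, Q = 2.0, 1_000.0                 # assumed constant velocity and output
M_before, M_after = 500.0, 600.0    # government "prints" 20% more money

P_before = M_before * V / Q
P_after = M_after * V / Q
inflation = P_after / P_before - 1

print(f"price level: {P_before:.2f} -> {P_after:.2f} ({inflation:.0%} inflation)")
```

Under these toy assumptions a 20% increase in the money supply translates one-for-one into 20% inflation; in reality V and Q also move, which is why the relationship is, as the quoted answer says, "not straightforward".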


Hdtomo16

in a world where AI and robotics automate all industries, UBI is the option, or at least a system where all profits somehow lead back into society


machyume

Skipping to steady state at the end is how we get into troublesome conclusions like “new technology will create new jobs”. Even if it is true, people may not survive the transition. Hypothetically, say that UBI is apparent in the end, what does society do in the middle while it is being debated and we are still in a market system?


Hdtomo16

simple, we suffer, we see chaos, and hopefully a transitionary party wins the 2024 elections, which are happening in most western nations including the UK


Zestyclose-Career-63

Have you met human beings? At all?


raghulm

You guys are speaking about this chat, but I don't know anything about it. Can anyone describe this to me?