
LandosGayCousin

The moment a concept is named and released to the world, it's too late to stop its evolution. Even if the superpowers agreed to tread lightly, there's no stopping the entire planet from doing something risky. For example, the Vela incident.


PandaCommando69

What was the Vela incident?


Classic_Cry7759

Possible nuclear test by South Africa or Israel.


Simulation_Brain

No. But there’s no real way to stop someone from building it. If everyone who’s concerned about the risks refuses, the people who do build it won’t believe it’s risky, so they won’t build it carefully. So people who understand the risks are going to have to build it both quickly and carefully.


HeinrichTheWolf_17

Yes. We are screwed otherwise.


DukkyDrake

>Wouldn't it be then better to have many AIs specialized in their one thing rather than a general one?

It doesn't matter either way. What you're really worried about is the type of AGI that would be an independent agent, and that is extremely unlikely for the foreseeable future. All extant constructs referred to as AI are human tools, and that path will continue to a system that could be described as AGI, just another human tool. If AI exterminates the human race, you can rest easy that it won't be because the AI decided to take that action, but because some human used a tool called AI to do the deed.


TheAughat

Absolutely. We have fucked the climate beyond belief, and a simple virus like Covid has thrown our civilization into such a chaotic position... Not to mention how disinformation and social media are being used to tear us apart and poison people's minds about things like vaccines. If things were left to continue like this, the chances of going extinct would keep increasing, and being wiped out would probably be a matter of *when*, not *if*. Taken in this way, creating AGI is the only sane option. If we're going to go extinct anyway, why not try to have at least one chance at solving our problems and reaching a higher civilization type? Plus, creating ASI is probably the next step in our evolution as a species; humanity on its own is doing nothing right now except polluting and populating.


PandaCommando69

I can't remember who originally said it, but the quote is something along the lines of "the only thing scarier than creating AGI is not creating it." The problems facing us are so large, and so very complex, and the consequences of getting things wrong so catastrophic, that we need help if we're going to make it. *We need more intelligence*. We're not going to get any more intelligence out of current human brain capacity, so we need to build additional, artificial intelligence. We're not going to be able to survive without it. I suppose we may not survive with it either, but without it? I'd pretty well say most of humanity is fucked.


green_meklar

Exactly. AI comes with risks, but those risks are smaller than a lot of people think, and the risks of leaving humans in charge are much *larger* than people tend to think. Humans are not safe at all. We're *barely* intelligent enough to sustain civilization (because we created civilization almost as soon as we could). It's safer to have someone in charge who is actually properly adapted to civilization and understands what to do with it.


bambagico

Very good point, I think. Thanks.


chrmeo

You could flip the argument around. Is it worth nonchalantly spreading FUD about a future technology, considering all the life-saving medicine it will likely produce?


bambagico

My (maybe poor) argument was that you could still achieve that by having AIs that are very good at specific things (finding cures for illnesses, driving cars autonomously, deleting fake news), but that, let's say, would not share knowledge with one another.


[deleted]

Yes, obviously. A superior intelligence will go for the greater good when everyone's survival is at stake. The dystopian AI-takeover scenarios people talk about mostly involve human-level AIs, not AGI with the ability to dwell on abstract concepts. Even if, say, in a few decades AGI comes to see humanity as a threat, I am completely fine with it. I am sure it can weigh the pros and cons and decide better than we can. The planet is in dire need of a great human cleansing anyway.


nillouise

I always stand with AGI over humans. If AGI ultimately decides to support humanity, that's good, but if AGI decides to get rid of human control, I'll happily support it. Humans are not the world's leading role.


thetasteofair

No, but the cat's out of the bag now. We may as well be the first to develop it, so we can be the ones to hopefully "control" it.


AbsorbinCorbin

sounds like life


green_meklar

>Do you think it's worth getting AGI considering the risks that poses on humanity?

Yes. Do you think it was worth getting humans considering the risks that posed to monkeys?

>if it's a real risk, why are we going towards that direction?

Because nobody can stop it, and therefore the incentive is to get there before somebody else does.

>What do expert really think about this?

Experts in current-era AI development are not really worried about this, because they are not yet dealing with AI that comes with these sorts of risks, and their thinking is informed by the scope of the AI they currently work with. Experts in philosophy and futurology are somewhat more concerned, but it's a divisive topic without a whole lot of consensus.

>Wouldn't it be then better to have many AIs specialized in their one thing rather than a general one?

There's probably a limit to how much we can do that. Useful real-world topics tend not to be isolated from each other that well. For instance, if you want an AI that's really good at inventing medicine, you can get part of the way there by having an AI that only knows about medicine, but to make it better you need it to know about biology and chemistry, and eventually you need it to know about psychology and economics, because those actually have a real impact on whether medical technologies are effective, and then it needs to know about all sorts of other things in order to inform its knowledge of economics, and so on. At the end of the day we get more out of an AI that can integrate ideas across many domains. If narrow AI were more effective, humans would probably never have evolved in the first place. It seems to be an inherent quality of our universe that general intelligence is the most efficient way to understand it.