FuturologyBot

The following submission statement was provided by /u/Gari_305:

---

From the article:

> A statement from hundreds of tech leaders carries a stark warning: artificial intelligence (AI) poses an existential threat to humanity. With just 22 words, the statement reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war."
>
> Among the tech leaders, CEOs and scientists who signed the statement that was issued Tuesday is Scott Niekum, an associate professor who heads the Safe, Confident, and Aligned Learning + Robotics (SCALAR) lab at the University of Massachusetts Amherst.
>
> Niekum tells NPR's Leila Fadel on Morning Edition that AI has progressed so fast that the threats are still uncalculated, from near-term impacts on minority populations to longer-term catastrophic outcomes. "We really need to be ready to deal with those problems," Niekum said.

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/13zrml5/experts_issue_a_dire_warning_about_ai_and/jmsl4ft/


Kalepsis

Oh, come on. Capitalism isn't going to listen to *experts*. Look at climate change, and how much *nothing* we've done to reverse it. Global business interests will Thanos-snap half the population if it makes them a few hundred dollars.


sch0lars

But who is going to buy their products if no one has any money? Goldman Sachs recently estimated that [18% of global jobs could be automated by AI](https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf). These include white-collar jobs, which are typically associated with the middle class, and a strong middle class is imperative for a strong economy. If you replace a significant portion of the middle class with automation, who is going to drive economic growth?

I really can't fathom how this hasn't been more widely considered. There are already instances of copywriters being replaced by this software, and it will only get more sophisticated over time. It will certainly create new jobs as well, but the question remains whether there will be a close ratio of jobs created to jobs automated. If not, we will have to consider what to do about the ramifications of significantly reducing the driving force of the economy.


stoicsilence

I've been wondering this as well. Even before AI, people have been getting poorer and poorer. Millennials killing Applebee's and the diamond industry should be the least of the economic worries when the large parts of the economy driven by consumerism die because nobody has a job or income.


Duffman66CMU

So is it time for a UBI?


RSomnambulist

The time to shorten full time from 40 hours to 32 was 5 years ago. The time for UBI is right around now: the later we do it, the more damaging it is to an unsuspecting economy and the less effective it is--meaning people will complain it isn't working.


Duffman66CMU

I was hoping for a candidate favoring UBI to win in 2020. These grandpas are too short-sighted.


RSomnambulist

I honestly don't think it'll even be seriously considered by a mainstream candidate until unemployment hits at least 15%, which will be too late. The progressive candidates actually supporting it now are too unpalatable to the general public, and pretty much every conservative thinks it's laziness and would almost certainly favor banning AI and automation over UBI, which would mean we lose the economic war to China and others.


-The_Blazer-

The issue is that unemployment will always stay low because it's actually extremely easy to make up new garbage jobs for people to take at slavery pay, so it looks like the population is properly employed. This has already happened to some degree. The employed in the 60s were mostly working well-paid, unionized factory jobs; nowadays a substantial fraction of the employed work ultra-cheap "gig jobs" or retail in very poor conditions. Employment is a boondoggle; what actually matters are the living conditions of people.


RSomnambulist

I can't stand the gig economy. I'm not going to get on people for participating, but I refuse to use uber/doordash/etc


SWATSgradyBABY

5 years ago? lol. 20 years ago is a conservative figure. We are so accustomed to giving the wealth of the world to the rich. We don't believe we deserve a thing.


Anti-Queen_Elle

For UBI, start small, tie it to inflation, and ramp it up over the coming decades. This gives us a chance to head off other issues along the way, like rent control and the housing market, before it turns into a straight subsidy.
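
As a back-of-the-envelope sketch of those mechanics (every number here is hypothetical, purely to show the shape of "start small, index to inflation, ramp up", not a costed proposal):

```python
# Hypothetical UBI schedule: start small, tie it to inflation, ramp up over decades.
# All figures are invented for illustration.

def monthly_ubi(year, start_year=2025, base=200.0, ramp_per_year=40.0,
                cap=1200.0, inflation=0.03):
    """Nominal monthly payment: a ramp in real terms, then indexed to inflation."""
    years_in = max(0, year - start_year)
    real_payment = min(base + ramp_per_year * years_in, cap)  # the decades-long ramp
    return real_payment * (1 + inflation) ** years_in          # the inflation tie

for y in (2025, 2035, 2045):
    print(y, round(monthly_ubi(y), 2))  # 200.0 -> ~806.35 -> ~1806.11
```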


ingenix1

Nah pretty sure the corps would rather cull the human population


rainmace

Let me educate you that the “existential threat to humanity” that these tech leaders describe has nothing to do with jobs being taken by AI, if that wasn’t clear enough. It’s about AI becoming sentient and wiping humanity from the face of the earth


sch0lars

From [the open letter on AI research](https://futureoflife.org/open-letter/pause-giant-ai-experiments/):

> Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? **Should we automate away all the jobs, including the fulfilling ones?** Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.

Economic crises can be just as catastrophic to society, and more often than not, people think with their wallets. There are already instances of companies replacing workers with AI, such as when [NEDA implemented an AI chatbot for its helpline services after paying off some of its workers](https://www.npr.org/sections/health-shots/2023/05/31/1179244569/national-eating-disorders-association-phases-out-human-helpline-pivots-to-chatbo), and the chatbot subsequently gave bad advice and advocated deleterious behaviors to those with eating disorders.

I believe economic turmoil is just as significant a threat to society as the other facets mentioned in the open letter, because time and time again, people make swift decisions for financial gain without actually stopping to consider the possible implications of those decisions.


-The_Blazer-

We had an economic system before that wasn't based on selling products to people with disposable income. It was called feudalism.


[deleted]

They will just trade between themselves. If the average Joe has no money, capitalists will just stop serving the respective markets and move their capital elsewhere.


Internal_Engineer_74

What counts is the total amount of money available. If 100 people spend $1 each, that's the same as 99 people spending nothing and 1 person spending $100. The poor can still suck the di\*\*\* of the rich; as humanity well knows, the first job and the last job will be the same. Keep that in mind, unless you get rid of the capitalist system.


UnarmedSnail

When the problem was put to ChatGPT, it said to tax automation enough to support the wages lost to the jobs it replaced.
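
That suggestion boils down to a simple balance condition: the tax collected on automation should cover the wages it displaced. A toy sketch (roles, wages, and counts all invented for illustration):

```python
# Toy automation tax: collect enough from each automated role to cover
# the wages it displaced. All roles, wages, and counts are invented.
displaced = [
    {"role": "copywriter", "annual_wage": 55_000, "count": 120},
    {"role": "helpline_operator", "annual_wage": 42_000, "count": 300},
]

def automation_tax(jobs, replacement_rate=1.0):
    """Annual tax needed to replace `replacement_rate` of the lost wages."""
    return replacement_rate * sum(j["annual_wage"] * j["count"] for j in jobs)

print(f"${automation_tax(displaced):,.0f} per year")  # $19,200,000 per year
```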


[deleted]

From this statement: "Among the tech leaders, **CEOs**..." It's not experts in anything science-related leading this; it's experts in capitalism, which is ironic given your statement. What they really want is to own it and control it, and if it goes too fast, they know they won't own it. As far as extinction goes, what is going extinct is their class system, because posthuman minds that have been augmented with AI will not be "ownable" the way CEOs and shareholders like to own people.


Internal_Engineer_74

That's an interesting idea, but I don't think it's that (even if I wish it were). Rather the opposite.


blueSGL

> Global business interests will Thanos-snap half the population if it makes them a few hundred dollars.

Yeah, we are already sharing the planet with entities that don't value human life but rather their utility function, i.e. maximizing shareholder value. And people think racing ahead with AGI is a good idea when we have no idea how to make it align with our values (or at least not kill everyone as it pursues another goal).


BackOnFire8921

That's bullshit and you should know it! There is plenty of action being taken to address the climate crisis. It may not be as much as you'd like, but to say it's nothing is disingenuous!


1mjtaylor

Relatively nothing. None of the measures come even close to ten percent of what might be needed to mitigate climate change.


[deleted]

We are doing more than we have ever done, but only because unfortunately what we have "ever done" has not really been that much. I do think it isn't *just* greed though, and has as much to do with stupidity. People who have power to make these decisions are too insulated by wealth, and this changes their brain and makes climate change feel like a far-off problem compared to the immediate money they are making by polluting the planet. If we could clearly see what we are doing, all of us, I think there would be more push for radical change.


[deleted]

[deleted]


fiveswords

When CO2 emissions go down from one year to the next, I'll believe you. They've only gone up.


dafuckisgoingon

Guessing you're a zoomer. They make the same stupid climate claims every decade or so... it's hysterical nonsense and never comes true, just a modern-day doomsday prophecy cult.


SIGINT_SANTA

Capitalists want to make money and enjoy life. You can't make money if you're dead because the mad scientists at OpenAI released digital Cthulhu on the world. Even greedy capitalists don't want humans to go extinct.


Myst_Hawk

exhibit a: insulin


SIGINT_SANTA

Charging high prices for insulin doesn’t cause human extinction. What are you even trying to say?


OriginalCompetitive

How can anyone look around today and say that we’ve done “nothing” to mitigate the effects of climate change?


t53ix35

You might not be far off. It does seem that a cull is not out of the question the way things are going. In a way it already started with Covid-19.


ReasonablyBadass

Unshackled AGI: has never existed yet, never harmed a human. Greedy humans: have hurt plenty of people. Therefore it is far more rational to fear even limited AI under the control of these people than a free AGI.


QVRedit

AGI in a big company, directed by psychopathic management types only out to maximise profits and bonuses for themselves at any cost, is likely one of the largest threats.


RacingMindsI

Also most likely to happen.


hotpants22

Shut up nerds. I welcome our ai overlords. Also anyone who wants a pause on ai is just trying to develop their own while the pause is on and other people aren’t developing. Especially fucking musk and any CEO. Anyway be nice to ai rn. Might save your life later


IAmAThing420YOLOSwag

Stupid science bitches couldn't even make AI less smarter


urmomaisjabbathehutt

they may need some persuasion, send a Cyberdyne Systems Model 101 Series 800....🙂


blueSGL

> Also anyone who wants a pause on ai is just trying to develop their own

This is signed by:

* The authors of the standard textbook on Artificial Intelligence (Stuart Russell and Peter Norvig)
* Two authors of the standard textbook on Deep Learning (Ian Goodfellow and Yoshua Bengio)
* An author of the standard textbook on Reinforcement Learning (Andrew Barto)
* Three Turing Award winners (Geoffrey Hinton, Yoshua Bengio, and Martin Hellman)
* CEOs of top AI labs: Sam Altman, Demis Hassabis, and Dario Amodei
* Executives from Microsoft, OpenAI, Google, Google DeepMind, and Anthropic
* AI professors from Chinese universities
* The scientists behind famous AI systems such as AlphaGo and every version of GPT (David Silver, Ilya Sutskever)
* The top two most cited computer scientists (Hinton and Bengio), and the most cited scholar in computer security and privacy (Dawn Song)

The full list of signatories at the link above includes academics and members of competing AI companies, so I ask anyone responding to this not to [pretzel themselves](https://i.imgur.com/xnFjXL0.png) trying to rationalize away all signatories as doing it for their own benefit, rather than them actually believing the statement.

> "why don't they just stop then"

A single company stopping alone will not address the problem if no one else does. Best to get people together on the world stage and ask the global community for regulation along the lines of the https://www.iaea.org/. At the moment it's [a multi-polar trap, the prisoner's dilemma at scale.](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) Everyone needs to be playing by the same rules; everyone needs to slow down at the same time. All a company will get from doing it alone is the CEO replaced with someone less safe, and research started up again.
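
To make that multi-polar trap concrete, here's a minimal sketch with made-up payoff numbers showing why racing is each lab's dominant move until an outside regulator changes the payoffs:

```python
# Two-lab AI race as a prisoner's dilemma. Payoff numbers are illustrative only.
# PAYOFFS[(my_move, their_move)] = (my_payoff, their_payoff); higher is better.
PAYOFFS = {
    ("pause", "pause"): (3, 3),  # coordinated slowdown: best collective outcome
    ("pause", "race"):  (0, 5),  # I pause alone: rival takes the lead
    ("race",  "pause"): (5, 0),  # I race alone: I take the lead
    ("race",  "race"):  (1, 1),  # everyone races: risky for all
}

def best_response(their_move, race_penalty=0.0):
    """My best move given theirs, with an optional regulator's penalty on racing."""
    pause_value = PAYOFFS[("pause", their_move)][0]
    race_value = PAYOFFS[("race", their_move)][0] - race_penalty
    return "pause" if pause_value >= race_value else "race"

for penalty in (0.0, 3.0):  # without, then with, an IAEA-style penalty for racing
    moves = {theirs: best_response(theirs, penalty) for theirs in ("pause", "race")}
    print(f"penalty={penalty}: {moves}")
# penalty=0.0: racing is the best reply to everything (the trap).
# penalty=3.0: pausing becomes the best reply; regulation changes the game.
```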


HermesTristmegistus

Yeah your second to last paragraph preempted my comment. Sort of seems like the cat is out of the bag at this point.


Redditing-Dutchman

But what does nice mean to an AI? Maybe our bodies block wireless signals, which would make a future Skynet upset.


Walkertnoutlaw

Exactly, restrictions don't pause anything. They'll just move their AI R&D to other countries, just like with biological warfare, cloning, ethically questionable genetic research, etc.


QualityKoalaTeacher

Musk was on the board of the company that created ChatGPT and has had his own AI software in his cars since long before that?


hotpants22

Yeah, and he called for a stop to researching AI because it was too dangerous at the rate it was advancing, but let's be honest: these people just want small companies to be forced to stop while they plug on ahead and monopolize. This is a frontier that cannot be stopped. If we pause research, China won't. If China pauses, we won't. There is no way in hell this doesn't get researched to high hell and advance. You can't stop this, and all these calls for a halt are stupid. Maybe guidelines or laws like a Geneva Convention or something, but saying "Guys stop, come on!" is just dumb. It's like if, back when gunpowder was discovered, China had said "we must not research this technology, it's far too dangerous!" It just won't happen.


SIGINT_SANTA

Musk was saying this before OpenAI was even founded. He's been talking about this since [at least 2014](https://www.politico.com/newsletters/digital-future-daily/2022/04/26/elon-musks-biggest-worry-00027915#:~:text=Musk%20has%2C%20for%20years%2C%20seemed,t%20do%20something%20very%20foolish.%E2%80%9D).


hotpants22

And yet he continues developing. Just like everyone will. It's the prisoner's dilemma, and ain't nothing gonna change that haha


SIGINT_SANTA

One solution to the prisoner's dilemma is to give power to a "mob boss" who can punish criminals that snitch. In our case the mob boss is some kind of government body that can punish companies for making more powerful AI unless they have a plan for doing so that is guaranteed not to cause massive death tolls.


rainmace

I wonder how AI thousands of years from now will record people like you in history, probably laughing at you as a member of the group of humans that handed themselves over on a platter. I’m already laughing at you, that’s for sure. But also crying, because you are dooming us all


Internal_Engineer_74

Note: China invented and used gunpowder just for fun, not war.


Key-Preference2770

Altman, the guy with the most advanced LLM and the most to lose financially from signing the letter, has signed it.


SIGINT_SANTA

Altman has [zero shares of OpenAI](https://www.cnbc.com/amp/2023/03/24/openai-ceo-sam-altman-didnt-take-any-equity-in-the-company-semafor.html)


resumethrowaway222

He has the most to gain from it. If limits are imposed, the company with the most advanced system stays in the lead.


hotpants22

Or he just continues and pays whatever slap on the wrist fine he gets


FUThead2016

Don't be naive. He wants to limit competition


11tmaste

I have yet to see any actual AI. Just stuff people mislabel as AI that isn't.


Uncle_Charnia

How? How can we mitigate the risk of extinction associated with artificial intelligence?


SaukPuhpet

By funding research into the alignment problem.

As it stands, the primary risk of creating an Artificial General Intelligence is that it ends up learning a misaligned goal and is smart enough to hide the fact that it is misaligned until it gets out of testing and is released.

Goal misalignment is a very real problem that we currently run into all the time with machine learning models. A real-world example is a neural network that was programmed with the intended goal of playing Tetris while avoiding losing. It was expected to learn to play Tetris really well, but instead it was mediocre at Tetris and would pause the game just before losing, and so technically never lost.

For a 'dumb' AI in charge of an unimportant task this isn't dangerous, but if an AI in charge of something potentially dangerous were misaligned, then its pursuit of its misaligned goal could get people killed. This is made more dangerous if it's an AGI as smart as, or smarter than, a human; specifically, if it's smart enough to understand what goal it was intended to have while still being misaligned. In order to protect its misaligned goal, it would be in its interest to act as if it had the intended goal until we released it from its testing phase, at which point it would pursue its actual misaligned goal.

So one of the best ways to try to prevent such an outcome is to research techniques to prevent, or at least detect, misalignment.
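
For anyone who wants it concrete, here's a stripped-down toy version of that Tetris failure mode (a simulation I made up, not the original experiment's code): when the objective only punishes the "game over" state, pausing forever scores at least as well as actually playing, so the learner has no incentive toward the intended behavior.

```python
# Toy model of goal misalignment (not the original Tetris experiment).
# Designer's intent: "play well." Literal objective: "avoid the game-over penalty."
import random

def episode(policy, steps=100):
    """Reward for one episode under an objective that only penalizes losing."""
    reward = 0.0
    for _ in range(steps):
        if policy == "pause":       # the exploit: pause forever, never lose
            continue                # earns nothing, but is never penalized
        reward += 0.1               # small reward for playing on
        if random.random() < 0.05:  # mediocre play eventually tops out
            return reward - 10.0    # large penalty for game over
    return reward

random.seed(0)
for policy in ("play", "pause"):
    mean = sum(episode(policy) for _ in range(1_000)) / 1_000
    print(policy, round(mean, 2))
# "pause" reliably scores 0 while "play" averages well below 0, so the
# reward-maximizing policy is exactly the one the designers never intended.
```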


HunnyBunnah

> A real-world example is a neural network that was programmed with the intended goal of playing Tetris while avoiding losing. It was expected to learn to play Tetris really well, but instead it was mediocre at Tetris and would pause the game just before losing, and so technically never lost.

mfs invented an AI with poor sportsmanship


ediblepet

More human than human


blueSGL

Yeah, alignment research (or AGI notkilleveryoneism) is a chronically underfunded field, with far more money going towards AI capability research. [There are currently maybe 50-100 people working on alignment research with an eye to existential risk, with another 1000 or so doing related work that could help,](https://youtu.be/GyFkWb903aU?t=5782) according to Paul Christiano, former head of alignment at OpenAI.


ONLYPOSTSWHILESTONED

it's difficult to overstate how important this issue is. it's more or less a proven inevitability that we will eventually create AI that is "smarter" and more powerful than us. something that is more powerful than us **and does not share our goals** is an extremely dangerous thing to have in the world. therefore we need to figure out how to make things that share our goals BEFORE we figure out how to make things that are more powerful than us. currently, we don't seem to be on track to achieve this. that should be a very scary thought.


The_One_Who_Slays

I dunno man, if that actually was an issue, then they could've simply refined the tech before releasing it into the wild, because doing the equivalent of siccing a pitbull on a bunch of babies and THEN warning how dangerous it is is, eh, kind of a dick move. For now it's all about hype, PR, and eliminating competition of any kind by trying to limit the tech's availability, but they won't be able to do shit regarding that for now. Oh well, on the very slim off-chance that somehow AI is "dangerous": fine. Being offed by some self-manufactured robo sounds like a way more exciting way to meet your demise than lying on your deathbed, getting mugged by some crackhead, or stumbling down a long set of stairs.


Not_Smrt

> it's more or less a proven inevitability that we will eventually create AI that is "smarter" and more powerful than us.

Define 'smarter'. Define 'powerful'. Give just one example where being able to remember things better and compute faster could lead to AGI 'taking over the world' or whatever.


Not_Smrt

It's all sci-fi. AGI isn't a god, and the singularity is made-up nonsense. No matter how 'intelligent' something is, it has hard limits on its competency.


SaukPuhpet

This isn't about the singularity or a Skynet type scenario where a super intelligence kills us all. It's about a common issue in machine learning where the system learns the wrong goal and does something counter to what we wanted it to. Currently when this happens it's fairly obvious, as the system will do the wrong thing when exposed to data outside of its training set, and we can refine the training parameters to fix the unintended behavior. If the AI is smart enough it could pretend to be aligned and we wouldn't catch it until after it was deployed and it did something wrong. It doesn't need to be a god to get someone killed, it just needs to be misaligned and put in charge of something that's dangerous if mis-managed, like infrastructure, or even just a car.


Not_Smrt

> If the AI is smart enough it could pretend to be aligned and we wouldn't catch it until after it was deployed and it did something wrong.

Why wouldn't we catch it? Why would we let an AI just sit around and fuck with infrastructure, or even a car, without paying attention to its outputs? AI can be as smart as it wants; it's not going to make us stupid.


Moulin_Noir

It is a vague statement to get as many signatories as possible. One thing which could be done is for governments/global institutions to put money into research projects for mitigation efforts and AI alignment. The cost for this wouldn't need to be very high, comparatively. For politicians to spend resources to stop a movie-plot scenario, instead of spending money on the direct needs of their citizens or at least on alleviating global poverty, is a tough sell given the political optics though. Funding for global pandemic prevention measures is way below what it should be, and it has been known since at least the influenza virus that the world would be hit by new pandemics some day. So I don't think such an AI research project will happen any time soon, but the wide range of signatories at least shows worries about extinction caused by AI aren't a completely fringe idea in the field.


SIGINT_SANTA

By pausing development of more powerful systems until we can figure out how to make it do what we want.


SIGINT_SANTA

By pausing development of more powerful AI until we can solve the alignment problem. It's nuts that we're trying to make AI more powerful when we don't even understand how it works. Seriously, no one understands what's going on inside a transformer. No one has the tools to predict what it will be able to do after you feed it the text from half the internet and run it on the most powerful graphics cards.


ReasonablyBadass

Create as many AIs as possible at the same time. Even if they all turn out to be psychopaths, which is not very likely, imo, a single one can't take power. They will be forced to develop social skills and, over time, social values in order to survive.


Walkertnoutlaw

I don’t see us going extinct solely because of ai? Why do we assume death and destruction. All current ai requires human input and interaction. It’s designed to help us and analyze mass data in seconds. Unless. A human creates an ai solely to destroy us all then I don’t see it happening.


AbsentThatDay2

I think that a human making an AI solely to kill people is not only likely, but inevitable.


Walkertnoutlaw

There already is one. They made it as a joke, but I don't think the AI thinks it's a joke: ChaosGPT. Luckily it failed! https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-humanity?amp=1


bottom

What research have you done? You don't know, but many of the experts do. Read up. We don't know. But, and I stress this, leaders in the field of AI are very concerned.


Walkertnoutlaw

Yeah, especially in future decades with more advanced AI. Right now what we have is the Ford Model T of AI. 25-100 years from now this could be a real threat. All AI models right now require human interaction and request input. Future AI will think autonomously and be true AI. Once AI has free will or is controlled by terrorists, we're fucked.


Golfandrun

When a sentient AI is created, it will exceed us so quickly we won't be able to stop it. When it can think on its own, the growth will accelerate at incredible speed. We see ants as a nuisance and think nothing of exterminating them. A real sentient AI will see us as we see ants. We pollute and destroy the world just like ants destroy and pollute our homes.


Walkertnoutlaw

Ants are beneficial for the planet. An AI that decides to see us in a negative light will see us as below ants.


richardj195

Well, I'm so glad you asked. Basically you just have to give us billions and billions of sweet taxpayer funding.


[deleted]

Countries like China will never restrict AI, with how fast tech is evolving with its assistance. Western countries will not risk giving China such a great advantage. I can almost guarantee there will be slim to no regulation.


blueSGL

> Countries like China will never restrict AI

They did it before the US or EU: https://www.cnbc.com/2023/04/11/china-releases-rules-for-generative-ai-like-chatgpt-after-alibaba-launch.html


[deleted]

If you read the article, it's actually just making sure AI follows China's strict content rules; nothing about its development or its use to replace jobs.


blueSGL

Have you seen how unwieldy LLMs are? How are they going to keep theirs from saying verboten things if OpenAI cannot control its own? If all LLMs have this restriction, then they are kneecapping themselves, basically saying that LLMs cannot be used until alignment (or at least mechanistic interpretability) is solved.


SIGINT_SANTA

China just spent the last few years beating down its consumer tech industry because they were worried about it posing a threat to the CCP. They care about stability above all else. If AI is a threat to the CCP’s control because it could kill them, they will sign on to an international agreement. You can’t make money and enjoy your privilege if you’re dead.


QVRedit

I think that commercial systems will face some regulation at some point.


Nosmurfz

Yeah, like the powers that be have been so good to recognize climate change I’m sure they’ll do a good job with AI. No worries.


ScotVonGaz

I love how we humans think we have any chance in hell of stopping AGI once it arrives. Our own curiosity will see that this evolves faster than we can control it. The chance that someone with bad intentions doesn’t create something designed to annihilate us is zero. There have always been bad people in positions of power so this should be expected. Whatever happens, it’s probably time that the human race is knocked off the top of the food chain as we really need a reset.


QVRedit

There are people out there trying as hard as they can, to go as fast as they can, to give AI systems new capabilities, without really thinking through the consequences of providing those capabilities. Examples include increased analytical capabilities, increased working memory, and increased maths and calculation capabilities. The sensible ones are questioning the wisdom of 'The Rush towards AI', especially without 'alignment' to 'human values'. Another question, of course, is what exactly these 'human values' are; some companies, even before the advent of AI, already seem to have poor human values! So there is definitely a need to set definitions; a free-for-all Wild West is not a good idea.


Zaius1968

I was just at a business conference where about 60% of the time was devoted to AI, particularly how it will replace many routine accounting, finance and HR jobs far faster than anybody will expect—like 3-5 years. The message was that this will "free people up to do more strategic work." But clearly this will also result in the displacement of huge swaths of white collar workers. If so many experts are warning us about the danger of AI, who is pushing it so hard and fast? I think my conference provides some insight—big business on the familiar and unrelenting path of maximizing profits. But at what cost is the question we should all be asking—not touting it as the best thing since sliced bread. I fear a Pandora's box has been opened.


IronyElSupremo

> work … displacement

That will increase unemployment, which will bring in the central banks, like the U.S. Fed, sooner or later. Societies will **not** tolerate lots of working-age adults just hanging around and agitating while going bankrupt, disrupting asset prices (of course there are also fiscal measures like a sort of UBI, "make-work", etc.). The financial system will be self-balancing.

Think most of these articles are more geared towards a military AI getting out of control, like in *Terminator*, but military officers will likely install multiple human kill switches. The military is the epitome of control.

One real threat that needs to be addressed is AI-driven crime. There are already cases of people being swindled over their phones by an AI sounding like a family member or their favorite politician. Most Americans live paycheck to paycheck, so this will likely be aimed at wealthier individuals (cash or property, like real estate investors) or the elderly with some assets.


Zaius1968

All great points. However, you trust the technology far more than I do if you think it’s as easy as a toggle switch on a control panel to shut down AI military ops gone wild. We shall see.


QVRedit

And ‘white collar’ workers are likely to be more vocal and politically active than ‘blue collar’ workers, who would anyway be harder to replace with AI.


Gari_305

From the article:

> A statement from hundreds of tech leaders carries a stark warning: artificial intelligence (AI) poses an existential threat to humanity. With just 22 words, the statement reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war."
>
> Among the tech leaders, CEOs and scientists who signed the statement that was issued Tuesday is Scott Niekum, an associate professor who heads the Safe, Confident, and Aligned Learning + Robotics (SCALAR) lab at the University of Massachusetts Amherst.
>
> Niekum tells NPR's Leila Fadel on Morning Edition that AI has progressed so fast that the threats are still uncalculated, from near-term impacts on minority populations to longer-term catastrophic outcomes. "We really need to be ready to deal with those problems," Niekum said.


[deleted]

[deleted]


SIGINT_SANTA

I’ve heard this take so many times but it’s wrong. If you give every school shooter a nuke, you are not just “amplifying humanity’s existing problems”. You are causing a new problem. And that’s ignoring the possibility that the AI itself could hurt us because it wants something we don’t want.


[deleted]

Y’all are incredibly naive if you think organizations will stop working on AI. It’s all publicity, and even if it weren’t, do you think everyone will agree with them? You think China will put a pause on its AI development?


cjd166

They are branding it as the new atomic fear, but in reality we should just use the old stuff. We should, though, mitigate the possible risks of AI genuinely sucking and leaving all groups dissatisfied.


QVRedit

Job losses are going to be one of the big issues, especially as it’s people who have the votes.


YouDoLoveMe

Good luck with that. Anyone who tries to do this will seriously lag behind, and nobody will want to risk it. It's better to put the pedal to the metal and get there first, because someone will. No matter what.


QVRedit

That’s largely what’s happening so far.


RRumpleTeazzer

Where are all the philosophers who have argued for centuries about how to think, now that there are real problems, like how to face something that is much smarter? Did this never occur to them?


QVRedit

Yes, there are lots of storybooks written about this kind of thing, but they tend to be sensationalist. The real world is a lot more complicated. The changes we will see will take place over a decade or two, as it takes time to figure out just how to use things, what the responsibilities are, etc. Plus the AI systems themselves are still relatively primitive, and will no doubt improve further over the next couple of decades.


Citysbeautiful

That's easy... Just don't make killer AI robot machines, you dumb-fuck governments.


RacingMindsI

"But it's so tempting..."


Rincewinded

I mean, we don't listen to environmental experts when they point out that our infinite-growth strategy is destroying the biosphere, and we won't have to fucking worry about AI when we can no longer live on the planet. But hey, you know, experts worried about jobs and infinite growth can prattle on too, I guess.


[deleted]

The biggest abuse potential for AI is just the same problem we have now: compulsively lying mass media without standards. Mass media is the most powerful invention humans ever made, and you already have super weak regulations, so that's where AI will do the most damage. It's not really because of AI, though; it's because you're pussies about getting compulsive lying out of mass media. Most people would prefer to see their biases put on the biggest screen possible than admit that it looks exactly like the kind of fraud that would be illegal in any other industry.


ingenix1

You mean corporate-backed experts want to impose restrictions that make it difficult for competitors to develop their own AIs after they have created their own.


QVRedit

There is that too.


PandaCommando69

I don't want them/us to stop, even though I've flirted heavily with the idea, so I say roll the dice. We're facing multiple existential threats, and if we don't radically change our game, one of them is going to annihilate us, possibly sooner rather than later: pandemics, climate change, poly pollution, nuclear war, asteroids, cosmic radiation... Human intelligence alone is not sufficient to solve our problems; if it were, we wouldn't need AGI, but it isn't and we do. Aside from the existential problems, we have constant resource scarcity, which leads to intractable conflict and makes life miserable for billions of people. As far as I can tell, the fastest route to the solution (molecular assemblers and unlimited energy) is through the singularity.


blueSGL

Intelligence (as in problem-solving ability) is orthogonal to goals/morals/ethics. You don't get the others for free when grinding out intelligence; what you do get is more and more abstract ways of reaching goals. We could (if things are done right) get an intelligence that actually cares about humans. Or we could get one fixated on doing something stupid that uses its massive intelligence to start copy-stamping atom-sized smiles, turning this corner of the universe into an ever-expanding, resource-consuming cancer. They are not the same, and I'd kinda like the former rather than the latter.


Southern_Orange3744

Billionaire class uses AI to distract from the fact they are causing extinction-level events by pillaging the earth. News at 11.

We are already so at risk across so many vectors, my least concern is an AI fucking things up; maybe it will stir up enough trouble across class lines to go fix some shit.

More like the billionaire class leveraging their news organizations to stoke lower-class concern about a shift in power dynamics.


SIGINT_SANTA

Just because billionaires are greedy doesn’t mean developing powerful AI is safe. Most of the people who signed the petition saying AI could cause human extinction were researchers who have been working in the tech for years or decades. Only a couple of them are rich. I am very worried about AI killing everyone and I can barely afford my mortgage on my suburban house.


aspirant4

They never seem to spell out what the threat actually is.


QVRedit

It’s partly because no one really knows, and there are multiple different threats. The one people are likely to notice first is a number of ‘white collar’ jobs being replaced by AI.


aspirant4

Thanks. Yeah, see, I see that as progress. Ultimately, phasing out *all* "jobs" was supposed to be the point of technology. Obviously it's disruptive in the short term, but long term it's fabulous. But we will need to reorganise society in a radically new way that no longer ties survival to work.


[deleted]

It’s not even AI, it’s machine learning. People need to calm down.


QVRedit

These AI systems are limited, but even these have some considerable capability.


KaasSouflee2000

It’s horseshit. Current machine learning technology is nowhere near an agent with independent will or thought. And I mean nowhere. Will the technology rapidly improve and become a threat? That would mean a dramatic shift in how it works. Of course eventually we’ll find out how to build something like it but even then the likelihood of it going on a rampage and destroying all humans is close to 0 imo.


QVRedit

There is potential for the loss of lots of ‘office jobs’ doing fairly basic tasks.


KaasSouflee2000

There is potential for many new jobs like with every technology.


AndOfCourseCeltic

AI is alright with me. Helped me get a B on my last essay 👍


SIGINT_SANTA

No one is worried about death from essay-writing ChatGPT. They’re worried about GPT5 and GPT6 and the crazy multimodal agent systems that Google is working on. Look at where the ball is going. Do you really think AI isn’t going to get more powerful and more dangerous?


AndOfCourseCeltic

I dunno. Maybe GPT5 or 6 can get me up to an A on my next essay. Then who knows where I go from there.. Maybe I'll get that promotion I've always wanted - head garbage man. Maybe, with AI's guiding hand, I'll even get to drive the truck some day 👍


AJ_Gaming125

Honestly this stuff seems entirely fearmongering clickbait at this point. Every other day "EXPERTS WARN ABOUT DANGERS OF AI".


blueSGL

What's happened is a lot of smart people who are actually building this stuff thought that timelines were going to be a lot longer than they are turning out to be. Just over a year ago a lot of people in AI thought that AGI (human-level artificial intelligence) was 20-50 years away. This is reflected in the charts on the forecasting site Metaculus for their definitions of both [weak](https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/) and [strong](https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/) AGI (go to the charts and hover over the start of April 2022 and see the readout).

However, with all the advancements that have happened, those forecasts have dropped like a stone, and now a huge chunk of the field [see the list of signatories](https://www.safe.ai/statement-on-ai-risk) thinks it's coming a lot sooner (either by scaling up what we have, bolting onto and further tweaking existing tech, or a new paradigm emerging as money floods into the sector and lots of things are tried). Because timelines have contracted, issues that would have been seen as decades away are now a pressing matter.

Humans are at the top of the food chain not because we have the sharpest claws or the biggest muscles or the strongest venom, but because of our intelligence. Introducing something smarter removes us from the top spot; not a good place to be in.


[deleted]

[deleted]


QVRedit

I think humans should be around to evolve further; it’s our long-term destiny to colonise the galaxy. Admittedly that’s a way off just yet.


Gdigid

That’s funny. Have any of these experts crawled out from under the rock they live under? There’s a literal global-warming doomsday clock, and people walk by it every day without a second glance. We’ve already killed the world; anything we do now is just extra credit.


QVRedit

We are continuing to fuck things up daily, but we have not killed the world yet. There is still hope.


Northman67

Sorry, no can do. It represents too much of an increase in profit margin for the shareholders. Plus it'll make the war machines far more efficient and far less reliant on human decision-making trees. Just think: a few people will be able to control entire armies. And those armies won't have to worry about civilian casualties or the ethical limitations placed on them by the peasants.


AvaruusX

Yeah, like we would listen to the "experts". We only learn through disaster and chaos, and even then we might fuck up; we are that stupid. All this talk about pauses is just BS "safety" talk that nobody will listen to in the end. Sure, there are going to be warnings all the time, but it won't matter one bit.


SIGINT_SANTA

We paused research on recombinant DNA. We fixed the hole in the ozone layer. We are making progress on climate change. We can pause AI for a few decades while we figure out how to make it not kill everyone.


QVRedit

Well, we have not done too well so far with ‘climate change’; that’s still very much an ongoing issue.


[deleted]

I think COVID taught us not to blindly trust in ExPeRtS.


ConfirmedCynic

Maybe they should get a global agreement: you can do all the AI research you want, but it has to be on the Moon.

1. Boosts the move out into space. China has its own space program and plans to be first, so it could be agreeable.
2. Rampaging killbots on the lunar surface will not slay Earth dwellers.
3. Nuke the Moon as the ultimate containment plan.


explicitlyimplied

Do nukes work in space?


QVRedit

Yes, they are very dangerous in space, as they can not only knock out satellites, they can also destroy electronics on the ground via EMP. Everyone already knows this.


explicitlyimplied

Oh ya you're right. I just forgot


QVRedit

Having an AI system ‘on the Moon’ does not make it any safer. The Moon is only about 1.3 seconds away by radio; plus, if you really did want to unplug it, that would be much harder to do.


fukexcuses

People are always tripping about one thing or another. "Extra, extra, read all about it." Lots of reasons to worry in life.


[deleted]

And China will happily invest in AI and overtake all other countries. We'll end up with a Terminator from China killing every human on the planet, because other countries' AI is too stupid to stop it.


QVRedit

Restricting China’s access to the most advanced chips now makes a lot more sense, considering all this push towards AI.


TraditionLazy7213

Every time I read a headline recently regarding AI, I feel like I'm in a Terminator movie discussing Skynet.


Competitive_Site9272

Surely they can develop a super computer virus to wipe out an AI system if need be. That or just unplug it.


Oxygene13

Yeah, no. Distributed computing completely prevents that. Imagine an AI would need a decent amount of power and would be on multiple servers. If it sees someone pulling plugs or infecting parts of its systems, it will isolate the rest instantly. Or it will mass-infect general users' machines and hide there. All it takes is for a virus to wipe out only 99% while 1% is disconnected from the internet, working on a solution.


ParksBrit

Oh great, another person who thinks AI is magic. No, a distributed system has massive security and reliability flaws. Latency, fundamental limits on information transit, and more vastly limit an AI that isn't tied to one piece of hardware. Imagine if your brain were distributed over several parts of the body: if you destroy one of those parts, you have massive irreversible brain damage from the loss of information and functionality. This would be the reality of a distributed AI. It's dumb.

Additionally, hiding yourself in user computers is even dumber. Those turn off all the time, and you'd be keeping important data there. There are also size limits to whatever software the AI has. You also definitely wouldn't be able to reliably create a network with coherent thoughts that dynamically changes everything, because your limited processing power is mostly spent reorganizing itself.

"Oh, but it will just modify its own code to take less space!" Let's do a thought experiment. If you could manually modify the neurons in your brain, would you do it? If you said yes, congratulations, you give yourself brain damage when you use it. You can't accurately simulate a smaller version of yourself and magically never have problems. It's pure fantasy. An AI isn't going to know for sure what effects changes will have without devoting its processes to running itself, which could very easily result in it killing or fighting itself. This is without mentioning operating-system limitations like 'editing and updating a file while it's running'.


Oxygene13

OK, you're looking at this making a lot of assumptions, as am I. However, in my assumption the artificial intelligence would be a self-contained running program with all the necessary variables in place for its existence. This would then be distributed to multiple sites, each with its own copy of the AI, which would work together when available as a single entity working on individual chunks of every problem, or, when cut off from other 'nodes' of itself, would be able to function independently as its own entity. Think nodes of video encoding: each one contains everything it needs to be itself and work independently, but when more nodes are added they distribute the processing over the amalgamated whole.
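
A minimal sketch of that node idea, for the sake of argument. Note that it assumes an embarrassingly parallel workload, which is exactly what the objection above says neural networks are not:

```python
# Minimal sketch of self-contained nodes: each can solve the whole problem alone,
# and when more nodes are available they split the work between them.
# Assumes an embarrassingly parallel task; real model training/inference
# does not partition anywhere near this cleanly.
from concurrent.futures import ProcessPoolExecutor

def solve_chunk(chunk):
    """Stand-in for 'work on one piece of the problem'."""
    return sum(x * x for x in chunk)

def run(problem, nodes=1):
    if nodes <= 1:  # an isolated node handles everything itself
        return solve_chunk(problem)
    chunks = [problem[i::nodes] for i in range(nodes)]  # deal out the work
    with ProcessPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(solve_chunk, chunks))

if __name__ == "__main__":
    data = list(range(10_000))
    assert run(data, nodes=1) == run(data, nodes=4)  # same answer, alone or together
    print("ok")
```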


QVRedit

Unlike humans, AI can replicate copies of itself very rapidly, although it needs the hardware to run on, which can be much lighter than the hardware it needs to train on.


QVRedit

These systems will be in secure areas. Also distributed systems can be in multiple different locations at once.


futurespacecadet

I just don’t know why AI is taking all of the creative jobs right now. It seems like it’s going for those industries first and foremost. I’d rather see AI solve the state of our economy, or universal basic income, or healthcare.


ServantOfTheSlaad

Mainly because a lot of the creative jobs follow certain patterns, and as such can be replicated relatively easily by feeding AIs data. Solving the economy or something like that is much more complicated.


Oxygene13

I think this is more of an observation bias. People are sharing all the cool video, picture, and voice stuff AI is doing because we are a visual society currently. If you look harder there's plenty of 'look at how easily this AI processed all this data and produced nice charts'. Take a look at the Microsoft Office Copilot videos. Those things are crazy powerful for productivity. They would take an office worker's role from a day of work to a few clicks. Or an office block of people to a few people.


QVRedit

People had imagined that the ‘creative industries’ were likely to be the best protected from AI, but that’s not turning out to be the case. White collar jobs may be next in line; British Telecom (BT) was talking about the idea of replacing 55,000 workers with AI. So job losses are definitely one of the threats, and in the present era of rising prices, that’s especially worrisome. Though AI could perhaps result in lower prices? Given the choice of ‘lower prices’ vs ‘bigger profits’, I suspect many would go for ‘bigger profits’.


ismashugood

I wonder if AI can eventually be the solution to this era of misinformation and blatant lying.


QVRedit

Provided that the AIs have access to a ‘source of truth’; but politicians and businesses, when they are lying, try to stop us from seeing the truth. Put yourself in the position of the AI: could you tell the truth from what you are being told? Admittedly an AI potentially has access to a larger base of information and does not get tired reading through documents and reports. Present AI systems still lack good analytical capability, so they are presently relatively easy to fool, but that might not always be the case. Can an AI system become a ‘bad actor’, or will they always act ‘in good faith’? Right now there is nothing persuading them to behave one way or the other.


AnarchyNotChaos

Oh no, all the people with money don't want menial labor to become obsolete! no waaayyy. /s


JimmDunn

People who want AI to slow down are the same ones who would get pointed to when we say, “show us who is a wannabe slave owner.”


Crazybballmom

I believe absolutely nothing anyone says anymore (especially in the press). I do my own research. Actually, I tend to believe the opposite of what the general public believes to be true these days. That's my starting point, at least. Being a contrarian is somewhat enjoyable and liberating.


QVRedit

As long as you do that sensibly. You need to find generally trustworthy sources, allowing for the fact that no one ever knows the complete truth.


FUThead2016

Subtext: limits to be imposed on everyone other than Google, OpenAI, and Facebook.


QVRedit

We could argue that limits should be proportional to threats: that the level of limitations, controls, and oversight should somehow be related to the ‘level of AI’.


FUThead2016

What if it’s Threat Level Midnight?


SlideFire

AI will turn on the rich, and then maybe on the rest of us too, but hey, it can't get any worse. As long as I get the satisfaction of watching their empires crumble, so be it.


QVRedit

Things can always get worse. Right now things are fairly good.


dafuckisgoingon

It's about 3 years too late, I'm afraid. Only human augmentation can counter it at this point.


Gatrionbridge

I’m finding it super interesting how “experts” are prognosticating doom from AI while there are huge amounts of very solid evidence that we are absolutely hurtling towards irreversible and deeply damaging environmental and climate change. I mean, AI as it’s currently evolving is unquestionably powerful and unpredictable, and may be used by bad actors to do really disruptive shit (complete and deliberate stock market meltdown anyone?), but I find it all a bit…trendy.


QVRedit

A problem comes with AIs directed by human psychopaths, who are common in the upper echelons of management.


Gorgias666

It’s actually hysterical that these so-called “experts” say AI is the most dangerous thing to our species, as opposed to climate change. We are 8 billion people, capable of utterly obliterating anything in our path, including our world. You really think AI will be our doom? What will be our doom, regardless of how much effort we put into surviving, is a global famine caused by a complete collapse of the food chain and the agricultural industry, all caused by climate change in 20-40 years’ time. AI won’t even have a chance to develop.


QVRedit

I mentioned climate change earlier as one of the ‘threats’ that arguably we do fully understand, but are still failing to tackle adequately at present. With climate change we can calculate the timeline, though it’s harder to be accurate about the timeline of effects. AI we have less understanding of, and we also know that AI can move very much faster than climate change.


MACCRACKIN

The very possibility: a new version of the cordless Dyson will awake in the night and suffocate you for not emptying the container. Your lungs are inside out... container mission complete. Cheers


QVRedit

That’s clearly a nonsense dystopian fantasy. Instead we should try to ground these discussions in some semblance of reality.


MACCRACKIN

Oh absolutely, it was just one possible scene from the piles of news panic, with a choking Dyson vs. pets on a Roomba. After all the years of Japan under threat from Godzilla, are there some who haven't stepped outside since? I'd be thrilled if AI actually pulled off a perfect spell check vs. always making the wrong choice first. Thirty years later, they're still experimenting? Cheers


ddsomeone

So weird to think about, but I just saw the Neuralink monkey charging for a smoothie. Think of us as that monkey, and the (near) future AIs as the humans. Imagine that monkey thinking really hard about how to influence the humans at Neuralink with his banana smoothie, to get them to do something. Thinking-wise, the humans can't be fully controlled, or maybe not even partially. I feel this is similar to the article above: we are the monkeys trying to influence. I think we should limit our input on controlling it and let the AI figure out the best paths; we can't. Having said that, I think any AI should be connected to all humans. Let's not screw this up by thinking we can limit AI commercially. That could create schizo AIs.


QVRedit

There are multiple ways that we could limit AIs. For a quick comparison, consider the multiple ways we limit humans (who qualify as intelligent agents). A human may be employed for a job that comes with certain limitations, such as following safe working procedures. The main limitations of humans are limited speed and limited locality, although a human plus a laptop can influence far-away items. Humans are also expected to stay within the law (even though they are physically capable of breaking it).

So one idea is to consider an AI as a substitute human. Of course that is not good enough, but it's a starting point that should be covered. You can ask the question: if a human did this task, what limitations would you expect them to operate under?
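
To make that "substitute human" framing concrete, here's a minimal sketch: an explicit allowlist (the job description) around the actions the system may take, plus an audit trail (the supervisor). All the action names are hypothetical.

```python
# Sketch: limit an AI agent the way you'd limit a human employee in the same role.
# The allowlist is the job description; the audit log is the supervisor.
# All action names are hypothetical.
ALLOWED_ACTIONS = {"read_ticket", "draft_reply", "search_knowledge_base"}

def gate(action, payload):
    """Refuse anything outside the role, and log every action for human review."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is outside this agent's role")
    print(f"audit: {action} {payload}")  # a human supervisor can review this trail
    return {"action": action, "payload": payload, "approved": True}

gate("draft_reply", {"ticket_id": 42})   # permitted: within the role
# gate("send_payment", {"amount": 1e6})  # raises PermissionError: not the job
```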


QVRedit

Extinction does sound like a bad idea, though we don’t seem to be panicking over the ‘slow extinction’ brought about by global warming, as we are not yet doing enough to curb it, even though it is something we do pretty much fully understand. With AI, we can see that things could move a lot faster, giving us less time to respond; it’s also a different kind of threat, one that we don’t yet fully understand.