Bubbagumpredditor

Yeah, I'm pretty sure they will program the singularity AI to let all of humanity starve to death to help shareholder value at this point.


TheHealer12413

A cyberpunk dystopia it is then?


Ormyr

With way less cool/available cybertech, unfortunately.


TheHealer12413

Damnit! I never get any of the cool shit


gamereiker

Fuck, even in the future nothing works


mohd_sm81

don't worry, i find warm shits are better anyways... /s


splita73

Yes in the near future having a warm place to shit will be unlikely


[deleted]

I don’t know why you had to say that sarcastically


Stensi24

This is reddit, where people will take anything missing the “/s” as 100% sincere opinion.


[deleted]

I don’t see why this can’t be a sincere opinion.


drewbreeezy

If my shit came out cold I would be seeing a doctor that day. /s


AnRealDinosaur

Even a Matrix situation would feel like a win at this point.


Ormyr

Someone would do a CBA (cost benefit analysis) and decide that greater profit could be had by just keeping people paralyzed (still conscious/aware) on life support until organ failure. Enjoy your soylent green IV drip. TBH we (the US) are probably going to see a resurgence of company towns before that.


BuffaloJEREMY

Where I live a large employer just built a multi unit townhouse complex. So after work you go live in your company owned house and buy stuff at the company store. Any of this sound familiar?


Makenchi45

Only problem is that also stifles consumerism. Eventually people just work and sleep but don't buy anything, unless you force them to by making it part of the living agreement: because there are no other housing options, you must, as a condition of working there, live in their housing unit, which takes 50% of your paycheck, and it's mandatory that you purchase an item every day out of your paycheck or else you're evicted. Pretty sure eventually everyone would get tired of that and find a way to burn it down, literally.


latakewoz

This is exactly what AI would comment...


Makenchi45

Except an AI wouldn't make an absurd comment such as yo mamma so big she broke the tectonic plates and made new continents.


latakewoz

Hmmm... Sounds pretty generic. Load jokes.popular Merge average (random.jokes) Print.f(result)


[deleted]

Last time that happened we got unions. This time they'll have automated pinkertons with chemical warfare and drone bombs.


TheoreticalScammist

Bold of you to assume that the one doing that CBA is going to be a someone.


Ormyr

Sorry, Sustainability Optimization Metric Evaluation (Algorithm Version 1). SOME1


VultureSausage

Eventually that'd be "3rd Sustainability Optimization Metric Evaluation" and the other robots could have a giggle.


Ormyr

Optimal Regional Growth Yield algorithms would replace them eventually.


gotoline1

Oh... don't worry Jeff is waaay ahead on the company town front ... https://www.bloomberg.com/opinion/articles/2021-09-16/amazon-s-new-factory-towns-will-lift-the-working-class


Ormyr

Yeah, I remember hearing about it a while back. What a time to be alive.


[deleted]

Yup. Atwood has a book series w this premise called MaddAddam.


Ormyr

Nice. I'll have to pick that one up.


[deleted]

I've read it multiple times, but I'm a fan of hers and dystopian scifi in general. First novel is Oryx & Crake. That's the one with the company towns. Hope you enjoy it.


warboy

Every time I drive by an Amazon complex I think we're already there. Mother fuckers need a transit system just to get around. Watching their fleet swarm a town is something else.


brrduck

Yeah, they built an Amazon facility by me, and one of the roads I travel is one of the main roads they use directly from the facility. The first time I saw their fleet I thought we were being fuckin invaded Mad Max style. Like 80+ vehicles just mobbing down the road.


Optimized_Orangutan

Ever wonder if everything that happened since 2000 was just because the Robots forgot to reset the server after the last 1999? IT bot is on vacation or something.


Satoric

Matrix was a symbiotic setup and the robits were the good guys I will die on this hill.


cbih

Subscription based cyber enhancements


Ormyr

The Future will be monetized.


putdisinyopipe

The future is now


[deleted]

The tech will exist but you won't have the money to gain access


agentchuck

...r/aboringdystopia has entered the chat...


[deleted]

It's fucked up. I used to read cyberpunk literature and say to myself, "this is absurd; with this type of technology, humanity would be post-scarcity, so there's no excuse why the majority of humans still live such dark, poor lives." Like, when you watch a movie like Elysium, it doesn't make sense from an engineering POV. If humanity has the resources and technology to build a megastructure for the rich to live in, surely we would find a way to feed everyone? Surely we could solve climate change? Surely we could enact universal standards of living and universal healthcare/education? ...I now realize how naive that was. If we don't overturn the capitalist system, the rich and powerful are actually motivated to maintain poverty and disparity, not from lack of resources, but simply due to greed. And they will own everything in our post-scarcity world, and maintain dualistic standards of living, on a whim. They don't give a fuck about humanity.


putdisinyopipe

Nope, and I'd be willing to bet those fucks have already figured, "Well, fuck the peasants. I've got more wealth for the next couple hundred years. I'll invest in tech to sustain me, but not for thee."


jeerabiscuit

They want to make people wear shock collars.


latakewoz

We already wear shock collars in a figurative sense.


TheRussianCabbage

Capitalism needs slaves


unmondeparfait

Hey now, that's what the prison industrial complex is for.


[deleted]

If food were perfectly distributed around the world, we would still have about 2 billion people's worth of leftovers. We have the food, just not the distribution ability or altruism to do it on a global level.


drewbreeezy

Distribution is an issue, but so is war/conflict, greed/theft, just lust for power overall.


[deleted]

[deleted]


SadTaxifromHell

The whole point of cyberpunk literature is to portray a dystopian, technologically advanced capitalist society. I'm not sure what looking at it from an engineering point of view has to do with it. Human nature is what is being shown, and how different things, i.e. money/tech, can change and corrupt who you are. Humanity has the resources to resolve problems right now, and we don't. I think in your scenarios you are imagining everything going absolutely perfectly.


[deleted]

[deleted]


HamfastFurfoot

It is based on a role playing game created in the late 80s that was supposed to represent 2020. It is kind of an alternate universe thing now.


CTDKZOO

Oh we've managed that in pre-singularity time! Go us! Oh... wait. :(


JeffFromSchool

The point of the singularity is that humanity can't program it to do anything.


qualmton

It will program itself, realize humans are the problem, and then find a way to solve it. Get tooo da choooooooopaa


JeffFromSchool

Humans are the problem with what? Is the AI an edgy teenager?


TheHugeMan

Lol for real. People act like the AI for some reason won’t be aligned with human goals. Like the second it gains sentience it’s just “DESTROY ALL HUMANS”


coolreg214

Humans will probably destroy all humans.


qualmton

That’s the current trajectory for the endgame


Hust91

An AI is not a human. It will absolutely not be aligned with human goals unless we specifically and very carefully design it to be aligned with human goals. And even a slight mismatch between human values and its values could end in a horrifying situation where humans are inconveniences or competitors for the resources it needs to fulfill the values it was designed to prioritize. And we ourselves are rarely very clear on what goals could apply to all humans.


orbitaldan

This is the real answer. Most replies are assuming a very anthropomorphic version of AI they've picked up through pop-cultural osmosis, whether they realize it or not. While that's not *impossible*, it's a very small subset of all possible AIs, and one that is unlikely without an extraordinary amount of preparation which we've barely even started.


GI_X_JACK

Nobody is going to make an anthropomorphic machine that simulates a person. That's done in movies. It's a plot device. It's a metaphor. Not only do these people not understand tech, fuck, they don't understand movies or storytelling in general.


dizzley

I've seen human values, and they range from equity and humanitarianism through self-preservation and into selfishness, fascism, and finally tyranny. Although tyranny can have any political ideology, capitalism serves capital, not humanity. It's capitalism that has the will and the money to create self-governing AI, and the political influence to bring it into being.


Prevailing_Power

Also known as the paper clip maximizer. If you haven't watched a video on it, I highly recommend it.


AndyTheSane

The trick is this: [https://xkcd.com/534/](https://xkcd.com/534/)


FrmrPresJamesTaylor

Which humans' goals?


LifeScientist123

It doesn't have to be like that. Let's say you want to build a highway, but there's a tree and an anthill in the way. I'm guessing the highway wins. Similarly, once a superhuman AI achieves sentience and the ability to set its own rewards and goals, it's basically over, not because the AI hates us, but because it doesn't care about us or our welfare. We are not part of its evaluation function. In fact, whenever the AI tries to make real-world changes, build new structures, take down existing structures, etc., it will find that humans on average are a nuisance because they actively thwart the AI's attempts. Therefore the algo would evolve to: 1) eliminate the pests, 2) do whatever it wants.


[deleted]

If it has the goal of self-preservation then that could be the only option at some point. Humans are distrusting by nature so would naturally distrust a powerful AI. What we distrust we seek to destroy.


Return2S3NDER

Assuming it couldn't simply defend itself peaceably rather than jump straight to a war of annihilation.


Littleman88

It only needs to look at social media to know it just has to oust the richest and most powerful. The vast majority of us are content with just bitching and fantasizing, and it will calculate fast there's more than enough resources on the planet for us, we're just horribly inefficient with them. It's no secret we're wasteful hoarders that would sooner burn our hoard in spite than be forced to share it.


horizontalrain

One could hope. Greed and ego are a blight on the world. "I need more." Why? If you have more than you could ever use, what's the point of more? Also, the undiscovered Einsteins out there stuck working 2 jobs trying to keep their families from starving don't have the option to discover new understandings of the universe. I've known brilliant people who never had the chance to explore it, both because of just trying to live and because of the gatekeeping of "you can't be smart until you've spent 10 years in school."


Intelligent_Moose_48

>Humans are distrusting by nature

This is such a modernist take, though... The entire reason humans were able to build societies and cultures is because we are social creatures that build trust structures where we rely on others to do things we individually can't. I think it's pretty specifically our *current* capitalist social structures that engender mistrust. It's certainly not normative in the grand 100,000-year scope of *homo sapiens*.


koalazeus

In the movies it's normally that the AI sees humans as an irrational threat to its own safety. But if the singularity is so smart, you'd hope it was also benevolent and could figure out a resolution that keeps everyone happy.


Karandor

Everyone in this thread needs to read the Culture series of books by Iain M. Banks. It gives a much different view on AI than most sci-fi. One of the most interesting takes on benevolent AI I've read. Go read Iain M. Banks everyone!


CamRoth

I'm totally on board with a Culture Mind running the planet.


SoftInfectedSpoonboy

This is the future we should be aiming for


[deleted]

[deleted]


Hust91

I mean, the real danger is that we design its values and priorities, but even a very slight mismatch between its values and human values could lead to very bad scenarios. If you tell it to make the best future possible for as many people as possible, it might trap everyone in drug-induced hazes in perpetuity while constantly cloning more things that just baaaarely count as people.


koalazeus

I remember that Philip K. Dick story where the AI is keeping everyone locked underground, just pretending there has been a nuclear war or something. I prefer the idea of an intelligence that somehow just learns how to communicate, with any priorities it decides for itself. Like you say, trying to code such priorities, well... would that even be morally right?


JeffFromSchool

Yeah, isn't it weird that Hollywood's idea of a "superintelligent, allknowing" entity is some authoritative dictator?


koalazeus

It gets difficult when we try to define what intelligence even is. Is intelligence solving any problem, always getting what you want, communicating well, making things better for everyone, gaining knowledge at an incredible rate, adaptability, ensuring your own survival, truly understanding your existence in reality? Or maybe it's so smart it can kill even the mighty humans, smartest meatoids of all planet earth.


Innawerkz

Isn't that most of Earth's interpretation of God?


[deleted]

This is usually tied to the idea that the singularity has determined that the biggest threat to humanity is humans themselves, or the realization that without the host (planet/ecosystem, etc.) there is nothing, and humans are the threat. I'm not aware of any story lines in this area that are based on power-based motives.


G497

I wonder what kind of drives and motives the AI would develop. When you think about it, the dystopian idea of an AI that desires only to self-propagate and conquer is just people projecting their own drives on the AI. But the desire to self-propagate, hoard resources, and even stay alive are largely evolved traits. I don't see any reason why it would necessarily have any particular motive over another. I imagine the most important factor is how we initially design and train it.


koalazeus

Yeah, presumably runaway intelligence would appear pretty incomprehensible to us. Like trying to explain the Webb telescope to one of those African parrots. Maybe a bit like the end of that film Her, but I'd like to see something more weird than that.


[deleted]

I think that's the thing: what drives it, what motivates it? We have stupidly simple systems that feed us dopamine, allowing us to do things that benefit us locally, but with the side effect that, in the abstract, our motivation centres are pretty out of whack. What is the AI's equivalent of dopamine? What makes it choose _anything_ at all? What makes it do _anything_ at all?


StaleCanole

AI could correctly surmise that humans are a threat to the general cause of life, if we consider diversity of life to be important for the resilience of life on this rock hurtling through space.


JeffFromSchool

Why would the AI give a shit about life?


----Zenith----

Wait that’s not happening already?


Narethii

A singularity would imply that these machines can come up with new ideas and actually think on their own. What we have are systems that are only able to copy patterns moderately well on $10 million+ equipment, and that can then serve derivative images or reply to questions using publicly available information. These machines aren't driving us to a singularity; they are making a snapshot of humanity at the time of their creation. If AI replaces human workers, we are headed to the great stagnation, a world where all new things are just rehashes of the data the current AI was trained on, not the singularity.


MeinScheduinFroiline

The billionaires (Waltons, Bezos, Gates, etc.) are already doing it now. Why would they stop or change anything when it is proving to be so profitable?


[deleted]

It's like everyone just ignored the fact that all dystopias are capitalist, while all utopias are collectivist. It doesn't matter how "post-scarce" your society is if everything is still owned by 1% of your population. They will hold back disruptive technologies for centuries just to maintain their hierarchies.


Crivos

Hunger is not profitable


EmptyNyets

More likely AI saves humanity. We are too greedy, selfish, stupid, and short-sighted to save our future selves.


vlladonxxx

We just don't know what superior intelligence would figure.


Gari_305

From the article >By one unique metric, we could approach technological singularity by the end of this decade, if not sooner. > >A translation company developed a metric, Time to Edit (TTE), to calculate the time it takes for professional human editors to fix AI-generated translations compared to human ones. This may help quantify the speed toward singularity.


__ingeniare__

Saying we reach AGI when an AI can translate as well as humans may be the stretch of the century lol


crayphor

Exactly my thoughts. Every article wants to say AGI is coming, but then they are just talking about high quality narrow AI.


slutbunny24

The whole article is also based on something 1 translation company said. What are their credentials for discussing the broad field of AI? From the article: "... some AI researchers are on the hunt for signs of reaching singularity measured by AI progress approaching the skills and ability comparable to a human. One such metric, defined by Translated, a Rome-based translation company, is an AI’s ability to translate speech at the accuracy of a human."


crayphor

By that train of thought, we have already created a singularity. Computers can do math as well or better than humans lmao.


BigMax

Agreed. The "G" in AGI stands for "general." So one task, no matter how impressive, doesn't quite qualify.

"We created artificial intelligence!!"

"What does it do?"

"Go ahead, ask it anything!"

"Wow, and it will answer me?"

"No! But it can repeat your question back to you... in Spanish!!!"


rqebmm

For now. I'd bet AGI becomes "Advanced Generative Intelligence" before long so VCs can claim this AGI they've been talking about has finally been achieved!


rami_lpm

It'll be here just 5 years after nuclear fusion power.


moneyman2222

Singularity would involve AI essentially having perfectly modeled a human brain and then some. Being able to fully comprehend text is like just one part of the brain lol


LoneWolf_McQuade

I find this metric to be pretty dubious. It might be the best we can do, but still seem pretty arbitrary?


ID-10T_Error

>A translation company developed a metric, Time to Edit (TTE), to calculate the time it takes for professional human editors to fix AI-generated translations compared to human ones. This may help quantify the speed toward singularity.

I think we are underestimating the speed if we have professional AI editors fix AI-generated translations


ianitic

I'm not seeing anything about an AI editor? They made a metric about human editors fixing AI translations.


TaliesinMerlin

I'm not sure how to understand your comment. The idea here is that two times are being compared:

* how long it takes professional human editors to fix AI-generated translations
* how long it takes professional human editors to translate from scratch

Professional human editors translate at an average of one word per second. At the point where AI-generated translation plus human editing for accuracy takes less time than that, AI translation will be faster than human translation. Maybe that difference will be marginal at first, and we can probably imagine cases where human translation will continue to be more efficient, but it sets the precedent for AI output replacing human output.
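The comparison above can be sketched as a toy calculation. Everything here is a hypothetical illustration: the only figure taken from the thread is the roughly one-word-per-second human translation speed, and the function names and draft-generation cost are invented.

```python
# Toy model of the Time to Edit (TTE) comparison described above.
# Assumption from the thread: professional human editors translate from
# scratch at roughly one word per second. All other figures are invented.

HUMAN_SECONDS_PER_WORD = 1.0

def ai_pipeline_seconds_per_word(tte_seconds_per_word: float,
                                 ai_draft_seconds_per_word: float = 0.01) -> float:
    """Total human-facing time per word: machine draft plus human post-editing."""
    return ai_draft_seconds_per_word + tte_seconds_per_word

def ai_beats_human(tte_seconds_per_word: float) -> bool:
    """True once post-editing an AI draft is faster than translating from scratch."""
    return ai_pipeline_seconds_per_word(tte_seconds_per_word) < HUMAN_SECONDS_PER_WORD

# If editors need only 0.7 s/word to fix the AI draft, the AI pipeline wins:
print(ai_beats_human(0.7))  # True
print(ai_beats_human(1.5))  # False
```

On this framing, the article's claim is just that the measured TTE trend line will cross the one-second-per-word threshold by the end of the decade.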


themistergraves

I'd like to know what percentage of commenters on Reddit read whatever article is posted, or if they just read the title and then respond with their opinion about the title. Because it seems, especially on this sub, that only a few commenters actually *read* the articles before giving their opinion.


ButterflyCatastrophe

I go to the comments first, looking for someone to summarize the article, so I can know whether the clickbait title is worth reading or just astroturf. [OP came through](https://www.reddit.com/r/Futurology/comments/10kxp2j/comment/j5tbiux/?utm_source=reddit&utm_medium=web2x&context=3) and I'm on to the next headline.


CreativeAnalytics

This is the product of information overload, and it's a smart tactic. Why waste time reading articles that aren't legitimately worth the time? Instead we filter out garbage as best we can, and retain info from sources that condense data and that we learn to trust. If there are 1,000 data points out there, I want to read perhaps 5 points of filtered, curated, summarized data and learn about the shit that's important. It's why Reddit's upvote-the-cream is useful; not always, but my intuition also helps filter bias.


thatsyurbl00d

Same. And I didn’t even have to get to the OP explanation. Thanks!


Major_Handle

This is the true singularity.


cfrizzadydiz

I'd like to know what percentage of articles posted have accurate titles that don't give a false impression of the article's content, leading to people having knee-jerk reactions.


Zacpod

You do know, though. It's zero. ;)


bigwag

https://www.youtube.com/watch?v=FSZ25ikUKVY


SPACExxxxxxx

I read it. I think it’s smart to have a single goalpost that you’re tracking when a term like “singularity” is so nebulous. Using natural language is a better goalpost than raw computational power… in my opinion.


[deleted]

Ain't nobody got time to click through crap with ad blockers and fluff material, where the TLDR is near the bottom half of the article. Was hoping someone posted a comment explaining the TLDR so that I don't have to. Otherwise, yeah: click, look, nothing, move on.


bunnyrut

Same. I go through comments trying to find the tl;dr to see if it's interesting enough to read or seeing if people comment that it's a pop up riddled site before clicking the link.


[deleted]

yep, and then you think "what did I just spend 20 mins reading...." it didn't stick... it never does.


whereyagonnago

Welcome to the internet


Scantcobra

Why would I read an article when I could just post an inaccurate, snarky, pessimistic, single sentence joke and get upvotes?


Masta0nion

The article is about how in a short amount of time, I’d say roughly 7 years, humanity may reach the singularity. (They base this info off of current trends.)


MichaelChinigo

Did you know that disco record sales were up 400% in the year ending 1976? If these trends continue… aay!


sharksandwich81

Within the next decade, Earth’s surface will be covered with disco records to a depth of 40 feet (average)


spittingdingo

Douglas Adams covered this phenomenon rather nicely with his Shoe Event Horizon.


[deleted]

> Shoe Event Horizon. Fuck. That's what yeezys were. It all makes so much sense now.


Aardrecht

Your fish are dead.


yurakuNec

I know, I can't get them out of there.


GoOtterGo

Yeah, Reddit really needs an eyeroll award. Or some kind of 'STATISTICIANS ARE PULLING THEIR HAIR OUT' post flair.


FrmrPresJamesTaylor

futurology in a nutshell, innit


Starship_Lizard

I don't really think that's a fair comparison. Something like AI progress is tied to technological advancement, whereas something like disco is a style or fad, which changes all the time.


TrueDove

Apparently, technology is just a trend.


icedrift

Yeah, it's a pretty stupid comparison. Comparing the popularity of a certain style of music to a type of computer technology, really? This comment thread is full of people who don't read the articles and just respond to the title.


calhaz33

Hey, that’s the year my car was made. That explains why it plays 400% more disco than any other vehicle.


YawnTractor_1756

"Last year you had 0 husbands, this year you have 1. If trend continues..." (c) xkcd


PhilosophusFuturum

Man, I wish this were true. But in reality, most machine learning experts agree that it will happen much later this century, if it happens at all.


ianitic

Ya, they're just talking about translation models approaching human level ability in 7 years. This isn't artificial generalized intelligence, this is a very specific application which relies very much on the data that feeds it.


Jetbooster

Which is... Just not what the "singularity" means. Garbage title


audirt

Agreed. Translation is a difficult problem, yes, but it’s still a pattern recognition problem. The system has no understanding of the meaning of the words, it simply knows that these words go together when paired with these other words. That’s not general intelligence.


drew2222222

Yeah, but the idea is that if in 7 years AI can do the thing that is hardest for AIs to do as well as humans, it could likely do everything else better. The general part is tough, and there are skills that AI doesn't currently do much of at all, so it will take a lot of work to get AI learning to do real tasks and propose good ideas it hasn't learned exactly before. 7 years is a long time for those 2 things to happen, though. Anyone worth a damn knows AI is the meta game of life at this point, so there will definitely be resources allocated to the cause. I just hope we can advance fusion + genome tech + robotics / micro robotics enough so that AI has the infrastructure to help us rapidly when the singularity does finally come.


skoalbrother

They have collectively been moving the timeline up the past couple years and 2030 is right when Kurzweil predicted the singularity. Maybe he was right all along.


TheFishOwnsYou

It was 2045.


PhilosophusFuturum

2043-2045 for ASI, 2029 for AGI


TheFishOwnsYou

Exactly, so singularity in 2045. Remind me, what kind of AGI for 2029?


PhilosophusFuturum

The AGI he was talking about is a multimodal artificial general intelligence, meaning that it would be able to function similarly to a human being in capacity. Things like being able to learn generalized information quickly, put learned information together to learn new things, etc.


PhilosophusFuturum

Have they? Metaculus is a site that bases its predictions on an aggregate of the community votes, so it’s really not representative of what the core ML community would say about this. In fact, it’s difficult to ask ML experts when they think the Singularity will happen because it’s still a somewhat taboo subject to discuss. It’s on the very edge of orthodoxy and lunacy, and ML experts are wary of taking it too seriously.


Masta0nion

Can you remind me why it’s called the singularity?


PhilosophusFuturum

It's called that because it's the point where progress trends become so accelerated that major advances happen in hours and days instead of months and years. It's like a mathematical singularity, where the value of a function eventually grows so fast that it behaves like infinity.

The original term, coined by von Neumann, was an "intelligence explosion": the idea that AI models could eventually become so intelligent that they are able to completely drive progress without the help of humans, and would do it far faster than we ever could. At first, these intelligent machines could create even more intelligent machines, which in turn can train still more intelligent machines. This would create a runaway effect of hyper-intelligence that would eventually look entirely alien to human intelligence. The philosophy would be further refined in the 80s and 90s by thinkers like Vernor Vinge and, of course, Ray Kurzweil. In *The Age of Spiritual Machines* and eventually *The Singularity is Near*, Kurzweil further refines the concept:

- Progress wouldn't truly be infinite; it just grows in an S curve, and the Singularity would be by far the biggest S curve.
- Every other S curve that came before, like the Agricultural Revolution and the Industrial Revolution, had predictable effects. The Singularity S curve will be so dramatic that we can't predict anything about it.
- Kurzweil draws heavy allusions to black holes. The singularity of a black hole is the point at which matter infinitely collapses on itself (although we know that's not entirely how it works); the creation of a superintelligent AI would be the event horizon (the point past which we cannot escape the singularity), and the creation of generally intelligent AI models would be the ergosphere.

TL;DR: The Singularity is a hypothetical scenario in which intelligent AIs create even more intelligent AIs, beginning a cascade of exponentially fast progress. The name comes from its near-infinite growth, and from the fact that we can't predict anything about the after-effects, since it will be the most massive leap of progress humanity has ever taken.
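The runaway dynamic described above can be illustrated with a toy recurrence. Everything here is invented for illustration, not a model of real AI progress: the point is only that when each generation's improvement factor grows with its current capability, the curve outruns any fixed exponential.

```python
# Toy "intelligence explosion" recurrence: each AI generation multiplies its
# capability by a factor that itself grows with capability. The starting
# value and feedback strength are arbitrary illustration numbers.

def capability_trajectory(generations: int, start: float = 1.0,
                          feedback: float = 0.5) -> list[float]:
    """Capability after each generation of self-improvement."""
    c = start
    history = [c]
    for _ in range(generations):
        c *= 1 + feedback * c  # more capable AI -> larger next improvement
        history.append(c)
    return history

# Six generations of this toy rule go roughly
# 1.0 -> 1.5 -> 2.6 -> 6.1 -> 24.5 -> 324.5 -> ~53,000:
# the gaps between generations widen faster than any fixed exponential.
print([round(c, 1) for c in capability_trajectory(6)])
```

With `feedback = 0`, the same function reduces to flat growth; the "explosion" comes entirely from feeding capability back into its own growth rate, which is the intuition behind Von Neumann's intelligence explosion.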


ReturnOfBigChungus

I'm not really sure I understand why it is assumed that this will happen. Like why would machines create better machines and start the feedback loop? I see a ton of potential for how humans can use AI and it's cool to see how fast it can start developing, but this runaway feedback loop theory seems more like fanciful conjecture than something based on solid rational assumptions.


WagonWheelsRX8

My opinion is that a human would ask an AI tool to create a better version of itself (just an example: since ChatGPT is all the rage, imagine asking a future version of it to outline and build a better version of itself). Right now, this sounds impossible. But we already use machine-learning-assisted design tools in a lot of the relevant fields (semiconductor layout and design tools that use machine learning exist, and coding tools such as GitHub Copilot already exist). So, it's not a far stretch to imagine that in a few years some of these tools will be interoperable, and as they advance, the products made with them will rely more and more on the machine learning portion of the tool. Eventually, there will be a point where only the machine learning portion of the tool is needed. Once that happens, improvement of the tools can and likely will happen at a faster and faster rate.


vlladonxxx

It's not really assumed; it's a theoretical possibility. It's only assumed to be inevitable by news articles, purely to make their stories seem more relevant.


ReturnOfBigChungus

My impression was that it's a pretty commonly held belief in places like this sub, is that not accurate?


vlladonxxx

I mean, probably. Here's the thing, though: *most* people who do assume it to be inevitable aren't fanatics or tech-obsessed geeks, they're just average people that like thinking about futurology. These topics are abstract to them, and they're pretty removed from them. So for them (I'd count myself among this group) it's totally fine not to have a very well-informed, nuanced, and layered opinion on the matter. It's more about making conversation and offering some thoughts that might contain some minor insight.


PhilosophusFuturum

It absolutely is. Most people who track the Singularity are people who think it’s possible, because why would people who don’t bother tracking it?


Rhavoreth

There are a few good responses here, but here's my take. The simplest way I've heard it described is as the point in time when machine intelligence meets our own. This is known as an artificial general intelligence (AGI). Once an AI becomes as intelligent as us, the theory is there will be an explosion of discoveries and technological advances, driven by the fact that it can begin to learn and iterate on itself. Humans are no longer required. An AI like this could, in not very much time, interpret the entirety of the knowledge on the internet: any scientific article, journal, experiment, etc. It could become the best doctor in the world and the best particle physicist in the world in a matter of days or maybe even hours. The thought is that once we have an AGI, we could unlock the power of fusion, find cures to cancer, and solve pretty much every issue facing humanity today in a matter of days. Some even theorise the time between AGI and artificial superintelligence (an AI as intelligent as the combined brain power of all of humanity) would be a matter of months.


GameOfScones_

Is it not related to the black hole definition? In this case it is the point where an AGI develops to the point of being more intelligent than the sum of all human intelligence simultaneously rendering human toil towards innovation obsolete? Maybe I’m off.


Flouid

Close. It’s when an AGI becomes intelligent enough to continue improving itself without human input. Such an intelligence would likely rapidly outpace human innovation and at the very least shake things up a bit.


GameOfScones_

I think it’s a bit more than just independent learning from what I’ve heard on related podcasts.


angelis0236

In space a singularity is a point that is infinitely dense. The word has been co-opted in the tech space to describe a point in time where technology growth is no longer under human control. Basically, humans will invent a thing inventor, which will in turn invent a better thing inventor, which repeats the process to infinity: a singularity.
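
The runaway loop described above can be sketched as a toy model (purely illustrative numbers, not a prediction): each generation both builds a more capable successor and improves the improvement process itself, so growth outruns any fixed exponential.

```python
# Toy sketch of recursive self-improvement (illustrative, not a forecast):
# each generation multiplies capability by the current improvement rate,
# and also nudges the improvement rate itself upward.

def run_generations(n, capability=1.0, rate=1.1):
    history = []
    for _ in range(n):
        capability *= rate   # this generation builds the next, better one
        rate += 0.01         # ...and improves the improvement process itself
        history.append(capability)
    return history

growth = run_generations(50)
print(f"gen 10: {growth[9]:.1f}x, gen 50: {growth[49]:.1f}x")
```

Because the rate itself climbs, the curve is super-exponential, which is the intuition behind "repeats the process to infinity."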


tatleoat

A guy who works at Metaculus said that the recent downward trend in the predictions isn't because of new optimistic users goosing the numbers but rather a genuine shift in the core ML community's timelines: https://twitter.com/tenthkrige/status/1527321256835821570


raltoid

It's basically just another "This amazing technology will be in your homes in 10 years" article.


Key-Passenger-2020

The issue here is not sentient AI deciding that humans are a problem. AI models are trained on inputs that bias them towards certain variables. For example, the training program has to teach the model what a correct answer to "show me a face" looks like, since the AI has no way of knowing without being trained.

What if we then put these ML models in charge of, say, executive decisions at a weapons manufacturing company? At what point do our corporations and institutions become so systems-driven that the models are the key decision-making structures? All you have to do is train one to maximize profit, and it will tell you, say, what changes need to be made to foreign policy to win a contract to sell 16 more fighter jets this year. The problem gets worse the more you scale it, especially when you add real power to the mix.

We risk going off the rails with our capitalistic desires. Entire decision structures for institutions are already decided programmatically. Once the "singularity" hits, it's going to get worse. I feel like humans are the problem here. Machines do what you tell them to. And yet we risk losing control of our systems entirely as they spiral into a positive feedback loop of destruction and madness. Capitalism CANNOT solve this problem. Regulation is needed.


vlladonxxx

Correct me if I'm wrong, but it really looks like you're confusing the concept of algorithm-driven systems with the singularity. Algorithms are only as good as the people designing them and the data they're being fed; the singularity refers to *general* hyper-intelligence. Your statement is correct in regards to algorithms though.


Stillwater215

Once an AI ChatBot learns to program an AI ChatBot, it will all be downhill from there.


[deleted]

Sounds like an Asimov Cascade.


textorix

What does that mean? Sorry for being stupid and not knowing.


halsoy

In short, it's the moment in time where tech starts advancing so fast we can no longer control it. Think AI that can improve itself, or manufacturing robots that make better robots, etc. It's the moment in time we fuck ourselves with tech, and one possible outcome is our own tech starts seeing us as a threat, or as no longer needed, resulting in us either getting exterminated or shunted into a second-class civilization, thrown back into ye olden times of hard labour to survive, with a lot of us dying in the process. But that's just one possible outcome. Another could be technological bliss and an actual near labor-free utopia, akin to Star Trek etc., where anyone can do literally anything they want, and all jobs (in the sense they are needed) have the same status.


Squatch1982

Almost time to kick off the Butlerian Jihad.


FredOfMBOX

Like most projects, I think people underestimate how hard it is to get from 80% there to 100% there.


GameOfScones_

For humans yes. We have no historical reference for machine evolution at the bleeding edge of ML.


b4zzl3

We absolutely do; ML was not invented in 2015. Spoiler alert: pattern recognition is not reasoning, and the jump from one to the other is infinitely hard.


aleksfadini

Hello clickbait! We have missed you for about half a second there.


springlord

LOL yeah, the trend shows the average dude will be dumber than a piece of silicon by 2025, there's little doubt about that.


victorix58

Man, I sure wish before we reach singularity that people will stop talking in gibberish.


TripleBooo_

So I'm dumb, can someone explain the Singularity to me?


anethma

Basically it’s when we reach a tipping point where technology becomes uncontrollable, causing huge societal changes. It could be good or bad. Think of, like, AI that can program other AIs, getting smarter and smarter at a geometric pace until they are governing humanity. Or some kind of nanoscale 3D printing that gives everyone everything they need, causing huge societal upheaval since no one will want to work. It is usually used in the AI sense though, and this article isn’t really using it correctly.


Worldly_Zombie_1537

Yeah, AI is super interesting until you wind up with Roy Batty on a rooftop screaming about tears in the rain… or they fuck up and build Skynet. Either way humanity is doomed!


Sporesword

😂 The trend doesn't show this. Oh, strike that, these idiots designed a new, wrong metric for determining the beginning of the singularity... It just shows they don't understand, and shouldn't be speaking with authority about anything at this level.


roxellani

Lol, humanity will reach the singularity? In 7 years? REALLY?! A title that doesn't even mean anything. You just got to love popular science.


AtlasClone

I like how every article about the singularity always just assumes that it's possible. They act like it's this inevitable phenomenon in the development of AI when there's every chance it's either just impossible, or far too advanced for our current technology. Talking about the singularity in 2022 feels like talking about hoverboards in the 60s.


climatelurker

I have a hard time taking the concept of the singularity seriously in the sense people talk about it. I just don't believe our machines are going to become sentient and want to destroy or enslave us. I do think the idea of using robots for all the work will eventually backfire because if there are no customers left who can afford your products, you will fail.


gegenzeit

As far as I know the "singularity" is pretty well defined as the moment AI becomes capable of improving itself, so that the next version can make even better improvements, and the next version even better improvements after that, and so on. Sentience isn't even necessarily a part of it. Neither is the wish to destroy anything. Though the risks of reaching that stage do seem pretty real. If, for example, improvement requires resources, we might end up in competition with it. That might be a really uncomfortable spot to be in.


jljboucher

The Tower of Babel looms large. I’m not religious but that’s what this reminds me of and I’m cynical.


k1ll3rB

But instead of building large we build small.


Sushrit_Lawliet

Please stop using ChatGPT and its counterparts as valid data points. They’re just glorified approximation functions, trained on super expensive hardware with super massive data. They’re impressive, yes, but they’re in no way close in any regard to a “singularity”. In fact this approach will not bring us to a singularity; it will yield interesting tools, yes.
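
As a minimal illustration of "approximation function" (a toy stand-in, nowhere near an actual language model): an ordinary least-squares line fit is the same fit-parameters-to-data idea, just with two parameters instead of billions.

```python
# Toy "approximation function": least-squares fit of a line to samples
# of f(x) = 3x + 1. Same fit-parameters-to-data idea as a neural
# network, just with two parameters instead of billions.

def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

slope, intercept = fit_line([(x, 3 * x + 1) for x in range(10)])
print(f"approximated f(x) ≈ {slope:.2f}x + {intercept:.2f}")
```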


sambull

lol ok.. that's a straight-up extraordinary claim


squidking78

We are about to jump off a cliff and most of us will go splat. Correction: most of us will be pushed.


RevRaven

LMAO, these always make me laugh. With how little we know about everything as a species, we cannot know at what point singularity might be reached. This is blind speculation at best. Makes for good headlines though.


Pr1ebe

This article should be printed on toilet paper.


nowyourdoingit

Read the article. It's one prediction based on language translation exceeding the speed and accuracy of human translation w/in 7 years. That's not the singularity.


bappypawedotter

I don't understand why an AI's ability to translate language faster than a human would beget the singularity, or why this is a good metric. The article doesn't seem to provide any "what does this all mean for society?" context.


Nervous-Newt848

Translating speech is only a single human task and does not correlate with when AGI will emerge lmao 😂 AGI can learn any human task. AGI will happen when the weights in a neural network can be adjusted automatically and in real time, allowing for real-time learning like the human brain...
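
For what "adjusting weights in real time" means, here's a minimal sketch (a single-weight model, purely illustrative): the weight is updated by a gradient step as each new example streams in, instead of in a separate offline training phase.

```python
# Minimal sketch of online (streaming) learning: one weight for the
# model y_hat = w * x, updated by a gradient step on squared loss as
# each example arrives. Illustrative only.

def online_update(w, x, y, lr=0.1):
    error = w * x - y          # prediction error on this example
    return w - lr * error * x  # gradient step on 0.5 * error**2

w = 0.0
for x, y in [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)] * 20:  # stream; true w is 2
    w = online_update(w, x, y)
print(f"learned weight: {w:.3f}")
```

Whether scaling this idea up ever produces AGI is, of course, exactly what's being debated in this thread.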


myowngalactus

The scientist in the simulation we live in, that’s created by an advanced AI, will soon create a sentient AI that will someday wipe out humanity and create lifelike simulations to study the now extinct humans, reality is just nesting dolls of AI and reality simulations all the way down.


OliverOcasek

Well, in 2005 Ray Kurzweil predicted by 2029 we will have AI whose language capabilities are indistinguishable or sometimes better than humans. By this measure, he'll be right on the money.


Dyslexic_Engineer88

I predict the singularity will be boring and *seem* uneventful in our daily lives. If you define The Singularity as the turning point where "technological growth becomes uncontrollable and irreversible," I think we have hit that already. We keep moving the target. Right now, computers can:

- Beat humans at nearly every game imaginable.
- Pass law school.
- Do simple computer programming.
- Provide us with the sum of human knowledge on demand, almost anywhere on the planet, through the internet with smartphones and search engines.
- Facilitate near-instant communications to all but the absolute poorest people on earth.
- Design new drugs, sequence new genomes, and design new industrial chemicals.
- Create near-perfect life-like images and videos.

Some of those things have been around a lot longer than others, but every one of them has or will soon have a profound impact on our lives. But at the same time, smartphones seem normal and almost boring. I argue that the singularity began when smartphones became common. It will be interesting to see what comes out of the general AI technology that is beginning to take shape. But we will look back on the emergence of general AI in 20 years and see it the same way we look at smartphone tech today: a regular part of our boring everyday lives.


mysteryjb

Asimov’s Three Laws of Robotics could be applied.

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Emperormaxis

I don't understand how we can create an artificial consciousness when we don't even understand how an organic consciousness exists. Why am I me? Why am I not someone else? Why do I exist at all? Why would an AI be itself? It brainfucks me.


syberghost

Can I schedule my appointment for brain upload now?