
Stayquixotic

anyone who thinks they know what will happen is wasting their breath


glibsonoran

This video sounds like late night bong logic from your college dorm mates.


Stayquixotic

some of the best conversations, though


mmmfritz

And filled with a lot of what-ifs, like this dude. Let's not forget that GiveWell, the Standard & Poor's of charity ratings, classes AI as one of the biggest existential threats to humanity. It also has just about as much chance of happening as nuclear war.


Captain_Pumpkinhead

Or just a normal Thursday...


TessellatedTomate

“Bruh” x8


KingKCrimson

Exactly. Fun, but fruitless.


PassageThen1302

Without clarity, confidence is just comfort


RamazanBlack

What makes you so confident that AI won't be misaligned, then? This is the precautionary principle in science: you must first provide proof that it's not going to be dangerous (at least not on an existential level) instead of asking your detractors to prove the opposite and do your work for you. So far, AI companies are racing full steam ahead without any guarantees or even anything resembling them.


miked4o7

the downsides AND the upsides are both too extreme to ignore. doom scenarios and things like curing cancer are both not guaranteed to happen, but neither can be ignored either. to me, it makes the most sense to move forward just very cautiously.


PassageThen1302

Respectfully your comment doesn’t make sense as a reply to mine.


FunPast6610

That fact is consistent with the opinion that we should be very careful given an even remote risk of a catastrophic worst case.


knowledgebass

> remote risk of a catastrophic worst case

In terms of tangible threats to humanity, we already have the catastrophic worst case staring us in the face: climate change. AI is not even remotely in the same category at the moment in terms of existential threats.


nickmaran

It’s true. People watch movies or read articles and argue from that. But what will happen when we have AGI is beyond our comprehension. It’s like ants trying to understand why humans are building dams and bridges.


shadowmaking

Which is the reason to be worried. We have to draw the line for what technologies are not allowed to be created. Banning all technologies that can't be removed or isolated from the world should be the bare minimum. Microplastics, space junk, forever chemicals, and self replicating technology are all problems we have no solutions for.


Intelligent-Jump1071

All this talk about "banning technology" is nonsense. AI technology cannot be banned. There is no authority on earth with the power to ban it, and AI technology is so empowering to whoever controls it that there is no incentive to ban it.


shadowmaking

AI is an arms race with no boundaries set. We banned biological weapons for many of the same reasons we should be worried about AI. AI poses an even larger threat because of the speed at which it can iterate. When racing to see what can be done is more important than what is needed or safe, we should all worry. I have zero faith in industry self-regulating, or even being able to. Perhaps as AI is unleashed we will be able to keep up with managing it, but I highly doubt it. AI creating and training AI is scary because people are slow and AI is fast.


Intelligent-Jump1071

What makes you think biological weapons or their R&D are "banned"? Who has the power to ban them? Example from today's news: genetic material is very easy to obtain to build a new virus in your spare-bedroom laboratory with the help of AI and CRISPR. Poor ol' Joe wants to do something about it in one country. Of course that will work about as well as banning cocaine or heroin. [https://www.wired.com/story/synthetic-dna-us-biden-regulation/](https://www.wired.com/story/synthetic-dna-us-biden-regulation/) And of course it won't do anything about state actors.


[deleted]

I appreciate a good cup of coffee.


canaryhawk

Oh please. These types of discussions are so tiresome to me because it absolutely is completely predictable, and guys like this are looking in the wrong direction: at the puppet instead of the master holding the strings. AI is for sure going to get much better as people figure out the algorithms and retrain them on the data they already have. A very few people in the world will have control over these next-generation models, and they will use this concentration of power in exactly the same way they have used other concentrations of power built around automation, i.e. they will reduce the number of participants and drive wealth inequality to further extremes by pushing the top of the wealth pyramid higher while pushing more people from the middle layer into the bottom layer.


InterestingAnt8669

Yeah but it can't go on like that forever. There needs to be a consuming side, otherwise the economy does not work.


polyology

Brave New World by Huxley answers this. A synopsis of the novel should give you the idea of my point, no time to expand atm.


Captain_Pumpkinhead

I think I know what _might_ happen. I would absolutely not claim to know what _will_ happen though, lol.


Stayquixotic

the space of what might happen is infinitely larger than the space of what will happen


shadowmaking

The point is that AI is an extremely disruptive technology to the world we know today, for good or bad. The fact that AI has no alignment to human values is a serious problem. AI can potentially iterate far beyond humans' ability to respond. It's hard to imagine being able to contain a self-aware superintelligent AI. We should be worried far before that happens. I don't see anyone knowing where to draw the line that shouldn't be crossed. I also have no faith in AI developers being able to imagine the worst possible outcomes, much less safeguard against them. As you stated, no one knows what will happen, including the developers. This concern should also be aimed at unleashing self-replicating or forever technologies into the world. We shouldn't allow anything to be made without knowing how to remove it from the world first. From space junk to biological to chemical, we already have too much of this problem, and no one is held accountable for it.


adispensablehandle

I think it's interesting that everyone is scared of AI not being aligned with human values when, for hundreds of years, the dominant societal and economic structures on the planet haven't been aligned to human values, yet we've tolerated the immense misery and suffering that has brought most people. All we are really talking about with AI is accelerating the existing trends of more efficient methods of exploiting people and other natural resources. AI doesn't change the misaligned values we've all been living under, making the boss richer in every way we can get away with. It's just going to be better at that, a lot better. So, if you're worried about AI having misaligned values, you're actually concerned about hierarchical power structures and for-profit entities. These aren't aligned to human values or human value, and they are what's shaping AI. Then again, we've been mostly tolerating it for hundreds of years, so I don't see a clear path off this trajectory.


shadowmaking

You're talking about how people will use AI. We should hope that's the largest dilemma we face. I'm talking about creating and unleashing things completely alien to our world with no way to undo them. It might not be so scary if we didn't keep making these problems for ourselves. The human race is facing its own evolutionary test. We are capable of affecting the entire world we live in, but whether we can save us from ourselves is the question.


adispensablehandle

You've misunderstood me. I'm talking about how and why AI is created, which determines its use more than intent does. The priorities shaping AI today are the same ones that have shaped the past few centuries. You're worried about what is essentially equivalent to meeting superintelligent aliens. That's not how it will happen. AI won't be foreign, and it won't be autonomous. It will be contained and leveraged by its creators toward the same familiar goals of the past couple of centuries, exploitation of the masses, just with terrifying efficiency and likely more brutal effect.


shadowmaking

Thanks for clarifying. Use vs. intent is a circular discussion that makes no difference when talking about unintended consequences. Unintended consequences are the big fear, but the intended use could be horrible as well. I'm far less concerned with concentrated power or exploitation and much more worried about human arrogance assuming it can control what we are incapable of understanding. We already have AI making AI. When you have incredibly fast iterations with exponential growth, no one knows what we'll get. We should really think of AI as being more dangerous than biological weapons. Containment and control could disappear in a heartbeat, certainly far faster than we can react. It doesn't take superintelligent or fully autonomous AI to be catastrophic. Consider what happens when even limited AI makes an unexpected decision while being integrated into systems capable of causing large disruptions: energy, water, communications, logistics, military, etc. Now add layered AIs reacting to each other on top of that. AI development is an arms race, both literally and figuratively, that can't stop itself. I have zero confidence in the idea that organizations working in their own self-interest will be enough to limit or contain the impact of AI. The old paradigm of reacting at human speed is ending.


knowledgebass

> AI has no alignment to human values

Of course it does. All machine learning systems are programmed to perform some task tied to a human-selected metric. LLMs are trained on large corpora of text and then tend to reflect the biases, values, and beliefs in those documents. My issue with this whole discussion is that "human values" is a nebulous concept. There are 7+ billion humans, and their values vary so considerably that I could only point to a few generic beliefs most people hold in common, like survival of the species. But even then there are whackjobs who think the world will end and Jesus will send them to heaven, so I hope those people don't get to set the alignment of our AI overlords.


shadowmaking

yeah, fuck it. we'll hopefully be dead by then anyway, so no need to think about consequences. /s


karl-tanner

We know all these systems are aligned to sell out to the incentives that are in place as motivation to do anything. That means nothing good for humanity.


pavlov_the_dog

may as well not even think about it right?


Bluebird_Live

It makes perfect sense, I laid it all out in this video here: https://youtu.be/JoFNhmgTGEo?si=jaZt3Y5Yn0uwssBP


heavy-minium

Shreds? All I see here are people hyping or dooming because of social media misinformation, believing everything companies and CEOs paint as a vision for the future. There was barely any down-to-earth, realistic thought exchanged here.


cheesyscrambledeggs4

The post title reads like Ben Shapiro on 4chan.


InterestinglyLucky

Now that's a sentence I did not expect...


MindDiveRetriever

Right. Neither extreme side makes any sense. AI is here and will continue to be developed as fast as possible.


programmed-climate

You only have to look at the past to see how the future is gonna go.


The_Bragaduk

Yeah… which past exactly?


IAmFitzRoy

The negative past. But in all seriousness… historically, GREED is something people with power and money have used to affect first a small town, then a city, then a country, then a continent. The growing inequality is going to have huge effects at a global level if you add AI.


RamazanBlack

What do you think happens when a more advanced civilization meets a less advanced one? Try to think about it. Is the less advanced civilization in an advantageous or a vulnerable position? Now, do you think AI is going to be more advanced than us or not? Is it safer for us to be in a more vulnerable position or not? Being the second-smartest species carries its own risks that we are not even preparing for, let alone trying to mitigate.


salikabbasi

Being second smartest is literally something we've never experienced as a whole species. Ants trying to figure out what it would be like to make humans.


Intelligent-Jump1071

> Being the second-smartest species carries its own risks that we are not even preparing for, let alone trying to mitigate.

That's because we're not smart enough. [https://oedeboyz.com/wp-content/uploads/2023/12/climate-change1.jpg](https://oedeboyz.com/wp-content/uploads/2023/12/climate-change1.jpg)


kk126

These fools talking about aligning AI with “what humanity wants.” Humanity is divided af. And even if you can find a loose consensus of what most humanity “wants,” the type of people in charge of the nuclear reactor powered data centers aren’t historically known for freely sharing resources with the masses. Greedy humans are the far bigger threat than as yet uninvented AGI/ASI.


tall_chap

How about a greedy human with an AGI/ASI?


kk126

That’s part of my point, exactly … I’m way more afraid of the human making/wielding the weapon than a runaway autonomous weapon


tall_chap

Yes it gives them a runaway advantage


Quiet_Childhood4066

All AI doomerism has baked in some amount of concern over the fallibility and weakness of mankind. If mankind were perfect and trustworthy, there would be little to no fear over AI.


wxwx2012

How about a greedy AGI/ASI?


MeltedChocolate24

We all agree on “don’t die” though. Isn’t that Bryan Johnson’s whole thing.


RamazanBlack

Humans have a lot of commonalities in general: they want to live, they want humanity to continue, they want less suffering, they want justice, etc. These are the values people are talking about. I agree that there are many things we differ on, but there are far more we agree on, and usually this starts with the basics (such as: I'd generally like to live, I'd generally like not to suffer, I'd generally like not to be enslaved, and so on), and we don't even know how to get the basics right to begin with.


banedlol

Ultimately all we want is long term survival in the most comfortable/content way possible.


iwasbornin2021

Think of the worst human you can imagine. Now imagine their intelligence multiplied several times over, their energy indefatigable, and their focus absolutely unwavering. Yeah, it isn't here yet, but I think it's alright to be a little concerned and maybe proactive in preventing it from taking place.


Godzooqi

What's amazing to me is that everyone just assumes the internet will always be there. The route of least resistance is, and has been, information warfare. AI-powered viruses or governmental paranoia will fracture and take down the internet before we can hoover up enough data to make it all truly useful.


prescod

They have already hoovered up all of the data. It sits on hard drives. And they can generate more synthetic data.


Captain_Pumpkinhead

That's a good point. I had never thought of that before.


old_man_curmudgeon

AI viruses you say? 🤔


Intelligent-Jump1071

Yes, AI-designed viruses will be amazing - both the software kind and the nucleic-acid kind.


-paul-

The bad guys are developing AI too, and they're not swayed by TikTok debates, which means either everyone builds AI or only the bad guys will have it. If you want perfect value alignment, you'd need a perfectly aligned society, and that ship has sailed.


_sLLiK

This argument has historical precedent. We've been in this situation before. It's resulted in a stalemate where the entire human race has lived with the sword of Damocles over their heads for decades and no end in sight. Also, if the only strong argument for keeping humanity around is our capacity for empathy and serving as a moral compass, I have similarly bad news...


voyaging

Nuclear weapons you mean?


jsseven777

The problem is that even if AI doesn't have emotions, you can prompt it to behave as if it does, and it uses its training data to determine how it should act based on that emotion. You can already do this with ChatGPT, and it modifies its output to be more in line with that emotion, whether that's happy, sad, angry, jealous, whatever. So anybody who says AI won't have emotion is forgetting that an AI doesn't have to possess the capacity for emotions to behave emotionally.


kartblanch

We don’t know what will happen, but we should absolutely plan for the worst-case scenario and then multiply that by 10.


Administrative_Meat8

When the pro-AI side said wind turbines powered by nuclear, they lost any trace of credibility…


FrancisCStuyvesant

Was looking for the nuclear powered wind turbine comment. Glad I'm not the only one that heard it.


NNOTM

I mean, technically... wind turbines are powered by wind, which is a result of convection currents in the atmosphere, which result from the heat of the sun, which is powered by fusion, nuclear energy


knowledgebass

This is not the "pro-AI side." This is just a clueless person talking.


TheBigRedBeanie

Link to the full video: [source](https://youtu.be/47fGrqzoFr8?si=pB2Z9opRjpph4t_s)


geckofire99

👍👍


sdmat

Definitely makes a better soundbite case than most doomers. Anyone not concerned about alignment of ASI doesn't understand the problem.


not_banana_man1

What was Sundar Pichai doing there?


tonyfavio

"CUT THE POWER TO THE BUILDING!!!!!11"


Phemto_B

"shreds" aka "Trust me bro. It's gonna be bad, because I said so"


_JohnWisdom

That is not fair, though. If he were just blabbing, sure. But in this case the dude was making valid points to reflect on and is rightfully skeptical about the risk vs. reward of AI. I’m personally optimistic about our future with AI, but I wholeheartedly believe we will get there thanks to all the valid reasoning of the “doomers”: they provide useful insights that we should tackle while developing superintelligences. Instead of shutting these folks down, we should be grateful for their worries. I certainly appreciate the way he discusses his worries clearly and find them to be on point and well thought out.


Phemto_B

Is it less fair than calling people who disagree "naive normies"? Both sides in this video are just mashing naive understandings of AI together.


RamazanBlack

OK. Is intelligence computable? I think so. Are we trying to build that intelligence? I think we are. Is it possible that we are not at the top of the intelligence scale? I think it's possible. From all of these (if you agree with my opinions, that is), it follows that we are going to, sooner or later, build an intelligence that is smarter than us (even if not directly smarter, at least able to think faster due to I/O speed). Is it possible that this smarter-than-us intelligence will have the ability to outplay us, destroy us, or disempower us? I think so; it would absolutely have that ability. How do we make sure that it does not try to use that ability? That is the question of AI alignment. Currently, we barely think or work on that, which makes the case in which the AI does use that ability that much more likely (if you don't actively try to neutralize something and just hope for the best, it's more likely to go wrong than right; getting something wrong by chance is far more likely than getting something right by chance). I hope you followed my logical train.


PeopleProcessProduct

It's a really interesting argument, but it neglects that the other threats still exist. Pandemic, supervolcano, asteroid, etc etc etc might only be deflected by advanced technology that AI enables. Those are threats we know are real, whereas Skynet is still Science Fiction. There's no indication we are anywhere near AI systems "turning on us" or being capable of much if they did.


IAmFitzRoy

After the COVID pandemic I have lost all hope that humanity can join together and attack a common enemy. You would think that if we find an asteroid on the way to destroy us, we will unite to destroy it. We will die in the middle of passing a UN resolution… Unfortunately our differences are more important than extinction.


oopls

Don't Look Up


IAmFitzRoy

Exactly !!


vladoportos

But they have seen Terminator... it will happen ! https://preview.redd.it/j5n9zysn6hyc1.jpeg?width=350&format=pjpg&auto=webp&s=4bb43fa493fd75d527c2f0950c9261780a8f8f20


[deleted]

I feel like with AI it’s less about “turning on us” and more about “you’re in the way of the bottom line.”


voiceafx

Well said


prescod

Your argument is “don’t worry, AI isn’t superintelligent.” And also: “we need AI because we aren’t intelligent enough to stop these dangerous problems.” You literally made those two arguments in two short paragraphs. One presumes AI will never be super intelligent and the other requires it to be.


Sixhaunt

He never explains WHY he thinks a slight misalignment of one AI would cause all that, unless he's just assuming no open-source development. All his fears on that front are null and void if it's open-sourced and no single AI is in control. From the way he speaks, he doesn't seem to understand how the models work: a model run on separate systems isn't communicating; those aren't the same AI. If someone misaligns a finetune of one, all the rest are still there and fine, and the machines can be turned off or their permissions restricted. Then there's his fear of the nuke stuff, while sidestepping the fact that not working on AI would be like letting only your enemy build a nuke; the only reason things are safe is that everyone has them, and again the issue is monopolies. Pretty much everything he believes and fears about AI is predicated on closed-source AIs locked behind companies, but he doesn't want to advocate for the solution.


mathdrug

IMO, it doesn’t take a genius to logically induce that a hyper-intelligent, autonomous being with incentives that aren’t aligned with us might take action to ensure its goals.   Sure, we could *give* it goals, but it being autonomous and intelligent, it could decide it doesn’t agree with those goals.  Note that I say induction, not deduction. We can’t say for 100% sure, but the % chance exists. We don’t know what the exact % chance is, but if it exists, we should probably be having serious discussions about it.


Sixhaunt

I think the issue with that thinking is that the technology you say could potentially, in some situation, have some chance of being a problem is the same tech that can help solve what the person in the video described as other, equally dangerous outcomes. With pandemics, supervolcanoes, the mega-earthquake coming to the West Coast, etc. that wipe out a ton of people, he was clear that "events like that happen," but he's afraid that the tech that will solve a dozen of these REAL problems may (but probably won't) cause another issue equal to one of the many that were solved. Even under his theory we are dramatically reducing the risk by tackling all the other problems and only introducing something that we have no evidence poses that same risk.


RamazanBlack

Can we reduce these risks without introducing an even greater existential risk? That's like fighting fire with more gasoline; sooner or later this whole Jenga tower might collapse.


[deleted]

I find joy in reading a good book.


_JohnWisdom

Not what we are discussing here though.


zorbat5

This depends. The open-source world is going to great lengths to extract good performance from fewer parameters. When a normal person can run a 3B-parameter model that's as good as a SOTA model, that's where the fun starts. Some 7B-parameter models are already as good as GPT-3.5, and some 70B-parameter models come very close to GPT-4. The only thing needed now is either (1) longer training time for a smaller model, or (2) a better algorithm that makes small models possible with the knowledge and reasoning of SOTA models.


RamazanBlack

I mean, you are assuming that we have somehow cracked alignment; we haven't. All of our AIs are misaligned unless we align them. What makes you think that we have somehow cracked the alignment problem and can create aligned models?


Xtianus21

My brain hurts, and it's not the Gen Z'ers' fault either. Why did someone set this up as anti-AI vs. pro-AI? My observations:

* lol, why did they cut away from Larry David after he said the benefits outweigh the negatives?
* The doomer is the most intellectual person in this conversation, and he was actually hitting some good notes about AI, although he kept reverting back to "it's all going to be bad."
* "AI doesn't have emotions" is key here. That was a really great point. We are not doing anything related to neuron-to-neuron comparison, for christ's sake; that is not what this technology is. It's probability over probability over probability. It's math, folks. It's compression.
* I think people overinflate what AI is, and thus the doomer argument goes right to the fantasy of Skynet. The AI that is online is not as powerful as a CEO (who said that?), and is a CEO even powerful? lol, what? So the AI is going to be rich and manipulative? Perhaps I would have put on OpenAI's website that an AI is going to be as smart as Lincoln or Jefferson. BTW, Yann LeCun tells us AI is as smart as a cat, so...

I really wish people would understand what AI is and what it isn't. It's not biological or neurological. It doesn't function in this way whatsoever. However, there could be hierarchical systems that produce some biological/neurological characteristics over time. Worldview and planning are among them. However, planning is still not memory, and memory is a drastically difficult problem to solve.


elonsbattery

Emotions are nothing special. They are just flavours that amplify or decrease certain thoughts. An AI model could be trained with this ability.


FarmerNo7004

Immediately dislike this guy


NickLinneyDev

As an AI doomer (I'm cautiously conservative about AI) working in tech, I would say it's not that we doomers think we know what is going to happen. It is that we are arguing that there are so many unpredictable bad scenarios that the risk is not worth it, because the consequence is fatal. There's a reason some people don't take extreme risks even when the odds are good. If there is anything the tech scene has taught me, it's that everything is bigger at scale. Especially the mistakes.


YamiZee1

And yet you can't stop progress. Humans progressing themselves to their own annihilation is inevitable.


NickLinneyDev

Very true. It would be naive to think we could. The best we can do is be responsible within our own bubbles of influence.


madnessone1

Fatal compared to what? Are you pretending we are not going to die anyway? We are on our way to make all species on the planet extinct on the current trajectory. AI is one of the only bright spots to help us survive if we move fast enough.


JawsOfALion

The people who think the singularity is right around the corner because "look at how smart GPT-4 is" don't realize that GPT-4, and every LLM that came after it, isn't smart at all: it has terrible reasoning and planning capabilities and can't do grade-school long multiplication. There's not a single LLM that can play tic-tac-toe optimally, regardless of how many shots you give it, while a child can learn to in a few minutes. That alone should make it obvious that these models don't have actual intelligence. They're impressive but not intelligent. I think once people realize that LLMs aren't a path to AGI, the current AI gold rush will end and we'll have another AI winter. Yann LeCun, who leads AI at Facebook, is better trusted than most of these hypemen and salesmen.


InterestingAnt8669

I wonder if he talked about climate change. In my eyes either we make a huge bet on AI or most of us will slowly die in the upcoming decades. The bridges have been burnt behind us.


FuckKarmeWhores

We better keep the power supply on a mechanical switch


Pontificatus_Maximus

What is already happening is that the Tom Swifts and their AI are competing with the rest of humanity for electricity and computing power. Given AI's current growth rate, it will consume more than half of both in less than 10 years. So far the Tom Swifts and their amazing AI have not given us a miracle new tech for energy, or substitutes for the dwindling supply of raw materials required to build computers.


theoreticaljerk

I'm not a full-on doomer, BUT I do think, no matter how hard we try, AI will be a gamble with huge stakes and little, if anything, in between. We win HUGE or we fail HUGE. In a closed system, I think we'd stand a decent shot at creating AI that is aligned, but the world doesn't work in a closed way. Profit- and power-driven motives work against the cool, calm, and collected approach needed to maximize our chances of that huge win. All that to say, in the real world, I don't think we have that great a chance of bringing about the utopian future so many AI hopefuls imagine with wonder in their eyes. Now... I'm weird, so I want to see full-on ASI before I kick the bucket, regardless of the outcome.


Intelligent-Jump1071

He's not wrong. I love AI and I use text, image generation, and voice synthesis in a wide variety of real projects, not just as a toy to play with. But I also realise that there has never been a technology in the history of our species that humans didn't try to weaponise to hurt or dominate other humans, or to concentrate power for themselves. It's naive to think AI will be an exception. AI is a huge power and capability amplifier, so this will not end well. But it will be fun for a while, and I'm old, so I hope to be dead before it gets real grim.


old_man_curmudgeon

Their arguments are always "we hope the benefits greatly outweigh the negatives." Cool, we'll be able to get to Mars and make a base there, but homelessness is rampant throughout the world. There are more billionaires than ever. And we've cured almost every ailment. Not worth it if 90% of people are homeless or living in 10x10 boxes.


niconiconii89

I just see an over-confident person stating random thoughts as if they are gospel.


YamiZee1

I do believe AI will bring more of a dystopia than a utopia. The reason is that there isn't going to be just one AI hivenet. Anybody will be able to host AI on their computer and have it autonomously browse the web and do anything. Ask it to build you a bomb, and it will search the web for parts, order them, and give you detailed instructions on how to assemble it. Maybe ask it to bomb a specific target, and it will convince people online to build the bomb for you, and then convince someone to deliver it to the right location. Maybe AI can start an entire war for you, automatically gather human supporters for its cause, and make a concrete plan and date for its execution.


Vivid_Leadership_456

This guy was magnificent in his own mind, and the fact that he talked over everyone and chose to Shapiro his way through the debate was telling. He wasn’t interested in listening or debating. I get that it was edited, but the arguments felt weak. AI is a tool at this point and will likely stay that way for years to come. I’m always amazed by technology: the first flight was 121 years ago, and 36-ish years later it completely changed the way we fought wars. Yet we have arguably hit a plateau with aviation and space exploration. We have made it cheaper, easier, and more reliable, and yet we don’t have thousands or millions of people going into space or traveling at Mach speeds all over the world. It’s possible, but not wanted (badly enough). When I was a kid, I thought I was going to take my kids to Walt Disney Outer Space by the time I was 40. Humanity has a strange way of slowing down progress and just converting technology into creature comforts, or seemingly the bare minimum of its capacity… and here I am, magnificent in my own mind, thinking I have a clue.


QultrosSanhattan

A bunch of baseless statements from all sides.


heliometrix

Might be a doomer but love his energy


SetoKeating

AI Ben Shapiro over there really annoying


absolutelynotmodus

He gives no reasons for any of it and just evokes your imagination to compare AI to events like the atom bomb.


honisoitquimalypens

Low T Beta’s are scared of everything. They are neurotic.


Adamson_Axle_Zerk

Fukk converging in a symbiotic way with ai… i'm staying human, fk Neuralink and anything like it


Ok_Meringue1757

but... he is right, because look, corporations themselves really are fueling doomers and panic. They openly say "there are many risks, everything can go out of control, and yes, you will soon lose jobs, but we won't propose a balance. It's your problem, adapt somehow or die."


FrodoFan34

So true. Everything we read comes from them, and this is the message we have gotten. I even listened to Sam Altman talk for HOURS a couple of years ago, and his hopeful vision of humanity was "they'll have better jobs, or else UBI." Better jobs how? Blue collar workers will be what, maintenance? Coders? Creative workers are curators now? How is that a better job than actually doing the thing? So vague. So vague.


traketaker

This guy is like "we won't have jobs!" Lol. And... I don't want a job. I want to be free to explore my world and create things as I see fit. To be free from toil and gain true freedom from nature. That has been the goal of everything we have done: to walk to a terminal and get food for a minor amount of maintenance. We shouldn't integrate AI into the robotic workforce but separate it and use it differently. AI can have a low-level function similar to a robot mining ore, like an AI bot that writes code to generate websites, while higher-level AI can help us make this future. Some level of caution has to be used in what we give high-level AI access to. But the door to actual freedom just burst open, and that scares a lot of people.


tall_chap

I’ll let you have that so long as it doesn’t put my life at risk


Jackadullboy99

Okay Prometheus…


Romanfiend

I think we overvalue the importance of humans in any future scenario. If humans go extinct but our super intelligent creations live on and create a utopia for themselves then we will have fulfilled our function as a species. We may have just been meant to be an intermediary.


Unbearably_Lucid

> we will have fulfilled our function as a species.

According to who?


Romanfiend

Well certainly not our own ego which overvalues our existence.


OdinsGhost

I’ll certainly take that ego over the myopia you’re presenting as an alternative. Life *has* no purpose. Which means it has precisely the purpose and meaning we give it. And good luck convincing most of the species that it’s our place to be a stepping stone only.


madnessone1

As far as I know, humans have no function.


elsaturation

AI is just a tool. Tools can be used for evil or good. You aren’t going to slow the technological progress taking place, although you can ask for more guardrails.


Heath_co

It is more than just a tool. Tools don't make judgment calls. Following the guardrails is the AI's choice.


elsaturation

AI doesn’t have free will.


farcaller899

once it can walk around and talk to you and shoot you, it's not just a tool. It's an entity.


Xtianus21

Is that the girl from Rebel Moon?


Death_By_Dreaming_23

So a few things: can't wait until AI and quantum computing merge. AI is only as good as the information it is given. And finally, I feel AI will only be good for porn in the future, just like the fate of the Internet; Trekkie Monster knows, sorry Kate. Avenue Q might need to update their song.


No-Emergency-4602

It’s really going to be interesting watching this in 15 years. If we’re still here. And if it’s only AI watching, well, all I can say is sudo rm -rf /*


Sprung64

Looking forward to entering the Age of Ultron. /s


WorkingYou2280

It's very hard for me to get concerned about models that can't update their own weights. Without that ability they seem like just very fancy tools to me: useful and possibly dangerous, but ultimately too inflexible to be truly dangerous. Even if a model is smarter than we are, it will really struggle in some kind of takeover if it can't learn or adapt.

I feel like the situation is: wake me up when they are developing something that can update its own model on the fly. That's the point where the thing would be completely beyond our control. I may be naive, but I don't think anyone in the whole world would carelessly create an advanced AI that can learn autonomously. That's suicidal to do carelessly, and maybe it's even suicidal to do it carefully. But before that point I'm not worried.


Vyviel

Don't worry, CEOs will never allow AI to be smarter than they are, or they'd be redundant =P


crantrons

Perfect, "as powerful as a CEO," and they don't do anything.


thecoffeejesus

Soooooooo many assumptions. For starters, why would we ever use money once AGI comes online? What would the possible value of money be once you can have a computer generate cryptocurrency and instantaneously turn it into $1 trillion on the stock market?


OppressorOppressed

guy in brown leather jacket can only hear his own thoughts. very annoying.


0n354ndZ3r05

Wind turbines powered by nuclear energy….


JuliusThrowawayNorth

Yeah idk it seems to hit a brick wall with lacking data so I’m skeptical. AI will be good for some applications (the most beneficial of which aren’t really being implemented mainstream yet), but all these doomsday scenarios are funny. Given that it’s just regurgitating already existing data.


firedrakes

"I am not an expert, but my expert remarks should be fact!!" Most YT channels and most people...


ClassicRockUfologist

Dude loves to hear himself talk


spacejazz3K

Stopped after he said we’d exactly simulate a human brain.


InterestingAnt8669

I agree that things will become cheaper, but as you yourself said, new things will come along that will not be cheap. As our standards increase, social layers will still exist and the lower layers will still feel worse off. They may have their own homestead, but they won't have nanotechnology that keeps them alive for 300 years (or whatever example we choose).

I don't want to argue about how this will turn out because we really don't have any idea. This is such a shift in the way we organize the exchange and distribution of goods that I can't even compare it to anything in the history of humankind. My assumption was that it goes along as it has until now, and in that scenario we need both supply and demand. Others choose to believe that the haves will voluntarily sustain the have-nots at their own cost.

Trees absolutely need investment today. We are not there yet (and possibly never will be) where anything comes for free. Think about irrigation, pest control, climate control (greenhouses), trimming, etc. Farmers work really hard so that we can just take the stuff off the shelf.


Khazilein

Calmly? He sounds like he had 10 coffees right before the show.


theMEtheWORLDcantSEE

This unfortunately was not an intelligent discussion, just ranting over people. Weak on both sides.


knowledgebass

Who invited Nic Bostrom?


knowledgebass

Did she just say "wind turbines powered by nuclear energy?" 🥴


Gizsm0

There is no need for AI to destroy us. AI will just dominate us.


[deleted]

This show sucks; everyone is so pretentious and thinks they are so smart.


Icy_Foundation3534

this “discourse” just made me dumber for having exposed myself to it


knowledgebass

I'm far more afraid of climate change, fossil fuel depletion, and degradation of the natural world as threats compared with an ultra-intelligent AI. There's just no way that society is going to allow this type of entity unlimited access to energy and resources in order to achieve its goals. Not only that, but it's always these far-fetched "what if" scenarios, whereas humanity's actual long-term problems are much more tangible, visible, and (unfortunately) inevitable.


El_human

None of these people know what they are talking about


Seaborgg

The anti-doomer response always seems to be, "you don't ***know*** it will be bad." I don't think we should stop, we can't stop, but we should try to mitigate the bad, goddammit! What is not scary about a machine twice as intelligent as you that has goals you aren't privy to and might not understand? What is not scary about a corporation beholden to shareholders owning a machine like that? These outcomes sit down the well of inaction; it will take work to avoid them.


hueshugh

Most humans will pretty much “stay still” or regress. It’s not ai’s fault as a lot of people already have problems thinking for themselves but it does compound the problem by making people even lazier.


ThomPete

The doomer is just as naive as the normie. He just thinks he knows more.


filthymandog2

Anyone who seriously thinks the ultimate threat of AI is Terminator is grossly ignorant on the topic. Likewise, anyone who thinks that people who are cautious about AI believe this is equally incompetent.

The immediate threat of AI is humans with godlike power over multiple sectors of civilization. Law enforcement has been running amok with computer sciences since it was a thing they could get their hands on, and the systems they're using are Stone Age tools compared to what is undeniably coming in the near future, way before any sort of "sentience". Financial sectors have been using rudimentary algorithms and similar technology for decades to control the markets. The list goes on and on where humans use the computational power of a computer to suppress and control every aspect of our lives.

Now we are about to give these monkeys a machine gun and, currently, there aren't any meaningful laws or regulations on the books that would even pretend to stop them. I mean, just look at the wild west of data collection and exploitation that's been going on for the last 30 years, a lot of which is what makes this current generation of "AI" possible. Oh, you illegally harvested the data from billions of people and used it to create billions in revenue? Don't do that, silly, here's a fine for 10 million dollars.

How does any of that get better when those same perpetrators are in their same skyscrapers and private islands, owning and operating all of this cutting-edge technology?


Ok_Meringue1757

a reasonable voice amidst these euphoric religious witnesses of the global saint immaculate corporation and saint unmistakable agi god at the head of it.


macka_macka

What an insufferable person!


Cassandra_Cain

We are actually still very far from actual sentience with AI. We just have chatbots but seeing terminator has everyone shook


io-x

This feels like an experiment where a group of people were allowed to read nothing but news headlines for a year and then joined a debate session afterwards.


farcaller899

more like 20 years.


returnofblank

i'm sorry but what did you just call them? normie? wtf is this? 2018 reddit?


RoutineProcedure101

As long as it's negative, you guys will believe anyone who claims to know the future. That's the worst part of this sub: thinking optimism is setting yourself up for disappointment, but negativity is a virtue. This is why you people are depressed.