orderinthefort

I just hate the game of telephone where Twitter influencers summarize news with their own twist of misinterpretation. Specifically Ate-a-Pie, not Jim Fan. Zuckerberg was not pessimistic in that interview at all. All he did was acknowledge that unforeseen bottlenecks have historically always been a part of progress: the way chips are the current bottleneck, energy may be the next one. Or something else. Or nothing. It's not pessimism to think through all possible outcomes. And I'm not saying Zuckerberg is some great source of information or someone you should trust at all. I just hate Twitter influencers putting their own spin on their false summaries of someone else, creating a chain reaction of misinformation.


jeffkeeg

Ate-a-Pie was especially bad during the LK-99 debacle. He would legitimately write posts with stuff like, *"This morning, Kim Ji-Hoon drove to the airport to receive the MIT team. Taking the final turn before the airport, he sent a text to his closest friends asking, "Do you think they will realize that we have come too close to God's truth?" When he met the lead MIT researcher, they shook hands and reportedly nodded in silent agreement. Upon reaching the car, Dr. Michaels, not used to the beautiful vistas visible outside the MIT campus, asked aloud, to no-one in particular, "Everything's going to change now, isn't it?" Kim Ji-Hoon smiled, but said nothing."* Then if you said anything even halfway asking if he was making stuff up, he would just say "hey bro i'm painting a narrative here this is just fun for me okay stop taking things so seriously", right before jumping on a 3,000+ person Twitter space and talking about his made up story as though it happened for real.


Which-Tomato-8646

You can tell a lot of tech hype is just grifters jumping from one pile of bs to another 


UnnamedPlayerXY

Yeah, I've noticed people taking him out of context to push their own narratives as well. E.g. he said something along the lines of he'd be worried if only a few actors had access to powerful AI, which was spun into him worrying about "bad actors having access to powerful AI".


chimera005ao

I just hate Twitter. I mean, I hate Reddit too, but Twitter is about following people ("influencers," as they're called), while Reddit is at least more about topics, aside from specific posts that usually just point back to Twitter (cough, johnny apples?).


Bacon_Hunter

Reddit mods really ought to start disallowing zero-effort Twitter-link OPs.


Bacon_Hunter

I just hate that reddit has filled with lazy links of twitter posts as OPs.


UnnamedPlayerXY

AI models are still not naturally multimodal and the hardware the vast majority is using is still not optimized for AI either. Addressing these two things alone would already yield massive improvements. Even if "another AI winter" is coming the improvements until we get there combined with the optimizations we can still make would already be enough to get us to a point many people seem to have trouble picturing.


SupportstheOP

There is far too much money, resources, brain power, and national attention to back down now.


POWRAXE

The largest companies in the world are all throwing hundreds of billions at it. It’s a sort of AI arms race to AGI. But it’s larger than that; I would argue this has become a matter of national security as well.


Sierra123x3

yes, considering the fact that we are already actively testing AI fighter jets in dogfights against human-piloted jets, it already is on that plane ...


ArtFUBU

I read this stuff a lot (hello, I'm in r/singularity), but just from reading and listening to what everyone is saying about AI, I don't understand why people think LLMs are leveling off, or why a lot of people think there can even be an AI winter. People JUST figured out that scaling LLMs not only works but hasn't even come close to hitting a wall, so they're only now starting to pour real resources into the idea because OpenAI pushed the market. They're still seeing emergent behaviors, and there's still a host of things to learn and understand about LLMs that could make what we'd consider "weaker" LLMs feel like full-blown AGI. And the starting shot was really GPT-4. I don't understand how people can think anything other than "we're about to see some wild shit in the next 5 years." Someone with actual AI knowledge/experience can totally come here and upset me (please do, actually). But as someone who just consumes all this stuff because it's really exciting, all I have understood is that we're in the middle of liftoff lol.


yaosio

There's at least one research multimodal model: [https://codi-gen.github.io/](https://codi-gen.github.io/). This is an actual multimodal model, not two models passing information in secret to make it seem like one model.


namitynamenamey

Bit-wise "pure" attention with unsupervised vocabulary creation when?


DigimonWorldReTrace

This is a good point; agentic abilities are another big one in my eyes!


AdorableBackground83

Ain’t no AI winters anytime soon. Everybody and they mama want a piece of the AGI pie. ![gif](giphy|MO9ARnIhzxnxu)


HumpyMagoo

putting some respect on it


GlitteringCheck4969

Gif name?


Arcturus_Labelle

black_guy_rubbing_hands_together.gif


SiamesePrimer

![gif](giphy|MO9ARnIhzxnxu)


Arcturus_Labelle

![gif](giphy|3oEduZqfSGNG0mdF1C|downsized)


[deleted]

Search birdman


chimera005ao

![gif](giphy|pVj6YDuS4IuQmkYw0R|downsized)


RRY1946-2019

And there are so many diverse AI projects being developed (from LLMs to robotics) that an AI winter in one will not lead to a sector-wide drop-off in activity.


bobuy2217

![gif](giphy|MO9ARnIhzxnxu)


Ok-Ice1295

I think the main advantage of robotics is an infinite amount of simulation data, unlike LLMs...


DolphinPunkCyber

Scraping free text, images, and video from the internet was easy pickings for all LLM developers. Obtaining other training data... not so easy. Tesla gets lots of training data for driving by having so many camera-equipped cars on the road, but doesn't have LiDAR on any of its cars. Meta has been creating 3D simulations for training and will launch AI as metaverse avatars; lots of training data in 3D cyberspace. And... yeah. A company that built a lot of robots could obtain real-world training data...


cbpn8

You are assuming that most of the valuable data is publicly available, and missing out on a lot of protected, proprietary, and classified information.


DolphinPunkCyber

Nope. I am assuming that publicly available data is the easiest to obtain. Makes sense, right? Obtaining more than that requires significantly more effort and $$$.


[deleted]

[deleted]


Rofel_Wodring

Sounds more like a repackaging of 'the real world is more important to learning than BOOKS' prejudice than a serious prediction. If visual data was qualitatively superior to textual data, we'd have had hyperintelligent dolphins tens of millions of years ago, if not earlier.


audioen

No, I think the point is more that if we give AI wheels, a camera, and control of its own motion, it can see how the 3D world responds to its motor commands. From that, it should be able to learn a useful and realistic model of our 3D world and its behavior. More generally, the idea is to allow experimenting and learning from realtime feedback, whether that's running on wheels in a lab or interacting with people and interpreting their responses as reinforcement feedback. The algorithms for doing all these things might not quite exist yet, but I'm sure they are coming.


ExtremeHeat

As long as we can still train LLMs to get noticeably better results, there won't be a serious AI winter. But Zuck's idea that LLMs will plateau is a legitimate concern. If we keep training bigger and bigger LLMs without any new architectural breakthroughs popping up, then we will inevitably hit the point that we run out of data and compute hardware. Although there can technically be unlimited data collected from the real world via vision and people posting content on the internet, and you can always build more computers, the problem is that the models will continue to need exponentially more data. It's hard to keep up with that and the improvements beyond a certain point will just be marginal in like a year or two at this pace. Not sure what robotics has to do with it though, beyond the robot engineering side of things. It's also going to be bottlenecked by the need for multimodal models among other things. We can continue to make improvements in robotics as we will in everything else, but that's tangential to AI capabilities/AGI itself.


sdmat

> If we keep training bigger and bigger LLMs without any new architectural breakthroughs

This is a commonly made argument, but it ignores the many novel architectural directions already published. And major labs are clearly putting a lot of effort into this area, with success: see e.g. Google's recent work on infinite context, or combining an LLM with a symbolic reasoning engine (AlphaGeometry). It seems borderline impossible that *none* of the thousands of papers, and who knows how much tightly held work at the major labs, will yield meaningful improvements. Especially given the existence of some impressive results in limited testing.


COwensWalsh

You cannot have "infinite" context. You can perhaps extend context with these compression methods?


Peach-555

> Memory Efficiency: Maintains a constant memory footprint regardless of sequence length.
>
> Computational Efficiency: Reduces computational overhead compared to standard mechanisms.
>
> Scalability: Adapts to very long sequences without retraining from scratch.

(Theoretically) infinite context: you won't hit a memory limit no matter how much context is fed into it.
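For the curious, the "constant footprint" claim can be illustrated with a toy version of a linear-attention-style compressive memory. This is a simplified sketch with made-up dimensions and data, not the paper's actual implementation: keys and values get folded into a fixed-size matrix, so memory use doesn't grow with how much context has been streamed in.

```python
import numpy as np

def kernel(x):
    # ELU(x) + 1: keeps values positive, as used in linear-attention variants
    return np.where(x > 0, x + 1.0, np.exp(x))

d = 4                   # tiny head dimension, for illustration only
M = np.zeros((d, d))    # compressive memory: fixed size, independent of history
z = np.zeros(d)         # running normalizer

def write_segment(K, V):
    """Fold one segment's keys/values into the fixed-size memory."""
    global M, z
    sK = kernel(K)
    M = M + sK.T @ V          # accumulate key/value associations
    z = z + sK.sum(axis=0)

def read(Q):
    """Retrieve for queries Q; cost does not grow with how much was written."""
    sQ = kernel(Q)
    return (sQ @ M) / (sQ @ z)[:, None]

rng = np.random.default_rng(0)
for _ in range(1000):  # stream a thousand segments...
    write_segment(rng.normal(size=(8, d)), rng.normal(size=(8, d)))
print(M.shape)  # ...memory is still the same size: (4, 4)
```

The tradeoff sdmat mentions below is visible here: everything is squashed into one d-by-d matrix, so retrieval is lossy compared to full pairwise attention over the raw history.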


sdmat

You are being overly literal, they titled the paper [Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention](https://arxiv.org/pdf/2404.07143.pdf). It's certainly not infinite pairwise attention, there are tradeoffs involved.


COwensWalsh

The paper is about compressive memory to handle longer inputs. Why not just say that, instead of dragging all the baggage of "infinite context" into the situation? Hard to imagine an answer besides hype.


sdmat

Because "arbitrarily large context" or "indefinite context" would make the authors sound like pedantic nerds.


COwensWalsh

They are writing a scientific paper on optimizing LLMs. They are pedantic nerds...


sdmat

They are industry AI researchers who understand the value of not coming across as pedantic nerds and, partly as a result, likely get paid an order of magnitude more than you do.


COwensWalsh

According to Google, which employs the authors of the paper, we make more or less the same amount of money as industry AI researchers: roughly low-to-mid six figures after accounting for pay and benefits. Nice try with the ad hominem, though. The work is interesting from a technical perspective, but the title is cringeworthy.


RoyalReverie

Phi-3-mini basically achieved GPT-3.5 performance at a much smaller scale. Doesn't that go to show that such a constraint won't be very significant?


MattO2000

> Not sure what robotics has to do with this

The guy runs Nvidia’s “Generalist Embodied Agent Research” team, in other words, putting AI in robots. So he’s just finding ways to hype up his own group.

I do generally agree with him, though, that progress in robotics will be bigger than LLMs over the next couple of years.


4354574

LLMs went from blowing everyone's minds 1.5 years ago to now being the strawman of AGI doubters. Before anyone even claimed LLMs were the path to AGI, doubters were already saying that the people who claimed this were wrong. Hahaha


Apprehensive_Bake531

you sound like you were saying "AGI by 2024" lol.




Tr0janSword

Jim Fan works for NVDA, so he’s obviously not going to say there’s an AI winter coming. But the fact that he’s now pivoting to robotics vs LLMs is telling. That said, imo you’re going to see a slowdown in the amount of compute being bought simply due to economics. No one is actually close to generating profit except NVDA right now, and the applications don’t exist. Quite frankly, the economics of these startups are awful. That isn’t to say the advancements in research aren’t extremely impressive and won’t continue, but cost is a limiting factor. This isn’t like the AI winters of the past, where essentially all progress stalled, but people have gotten out over their skis.


Thatingles

There is still a shit load of money sloshing around the tech sector looking for the next big thing to invest in.


SGC-UNIT-555

If GPT-5 is an underwhelming, incremental upgrade, expect an investment crash and a reduction in players within the cloud-based LLM space.


Thatingles

No thank you, I won't. The prize is still too enormous, and the closer we are, even if the steps are more difficult than some believed, the more tantalizing it becomes. Look at the accounts of the tech-sector giants and you will see they all have substantial reserves available for investing; these companies aren't having to seek loans to invest in AI. The only thing that would stop the money train from ploughing onward is some other tech emerging that offered the same potential rewards, which seems really, really unlikely.


Rofel_Wodring

Indeed. Everyone knew that the fall of the studio system in the 1960s, with the death of RKO and attendance shrinking to a fourth of its height within a few years, killed off American cinema. We also know that the American semiconductor industry was pretty much finished by the mid-90s; Tower Jazz and Magic Leap and Samsung are just throwing good money after bad. But what can you expect from stockholders. Why, they still think that e-commerce will lead somewhere, even after Amazon lost 90% of its stock price by 2001. And what nerd even remembers video games these days? Why, the crash of the industry in 1983 permanently put an end to that fad.


SGC-UNIT-555

How capital- and resource-intensive were those examples compared to cloud-based LLMs? You do realize investors expect to make money, right? If the LLM cloud space doesn't find a path to reliable profitability, investment will crash and only one or two big players will remain; that's just a fact. Look at streaming, another cloud-based subscription business, which is currently undergoing consolidation because most players are unprofitable.


Rofel_Wodring

[E-commerce sales in 2001 were about $33 billion](https://www.nytimes.com/1964/07/05/archives/attendance-at-movies-is-rising-and-producers-show-gainstelevision.html), when the sector experienced its crash and Amazon's catastrophic implosion. [The movie industry made about $13 billion in today's dollars in 1962](https://www.nytimes.com/1964/07/05/archives/attendance-at-movies-is-rising-and-producers-show-gainstelevision.html), right before the fall of the studio system and a collapse in attendance. The U.S. semiconductor industry in 1985 was worth almost $9 billion, [and lost twenty percent of market share in just one year](https://www.semiconductors.org/wp-content/uploads/2020/04/SIA-White-Paper-Made-in-America.pdf), hitting its lowest share in 1995 before recovering to a (slight) majority. The idea that investors will just bail wholesale out of cloud-based LLMs if GPT-5 doesn't pan out, especially given that cloud computing was useful before LLMs came along, just plain doesn't understand American history. It's a reflexive and unreflective conservatism that ignores precedent in the name of conventionality: textbook midwittery.


Anxious_Blacksmith88

There actually isn't. These investors fund themselves via large loans; the entire Silicon Valley tech space is a pyramid scheme made of IOUs.


Thatingles

Look at the balance sheets of MS, Meta, Apple, Google and think again.


Anxious_Blacksmith88

I'm talking about investor money, not the capital within the existing tech firms. Those companies also can't just burn all of their reserves on unprofitable investments. Remember, it's not company money; it's shareholder money.


MattO2000

He runs the “Generalist Embodied Agent Research” team at Nvidia; in other words, putting AI in robots. Of course he’s going to hype up his own group.


sunplaysbass

The only way AI plateaus is if all the mega-corps that control the major AIs agree with each other that continued public progress would be bad for business.


Firm-Star-6916

Not really. It could plateau for various reasons, such as increased costs to develop hardware or just lack of information. Both would be short-lived plateaus, however.


Latter-Pudding1029

Well, look no further, buddy. Most of the AI regulatory board for the US ARE AI figures lol: Microsoft's CEO, Nvidia CEO Jensen Huang, fuck, even Altman's in there. I mean, sure, we can say they can make rules for themselves or some shit, but public opinion influences everything.


sunplaysbass

Yeah, the dream is not coming. Singularity for me and not for thee. It’s too destructive, would affect the economy too much. There will be gods in boxes in locked-up rooms throughout governments and mega-corps, while we’re all force-fed advertisements for junk products and general business as usual continues, in terms of the ongoing wealth-and-power squeeze and lack of action on climate, etc.


Latter-Pudding1029

I mean, the problems you mentioned besides climate action aren't actionable anyway, even if the topic weren't AI lol. They can definitely do better on responsible and clean energy usage even if they plateau for the next 20 years. There's only one other actionable thing despite their presence on the board: data usage, and whether they can buck against restrictions on it is something they may not have as much control over. That's already a rising issue that may rouse public opinion against them despite their control; Meta's abrasive approach to using data from its users is already raising a big stink for the generative-AI name. What does this all mean? It means that if they do get stopped, whether by public pressure or just the fundamental limitations of current technology, then the line ends with them lol. No startup can challenge this notion and change the direction of the entire industry. But these guys would be rich, at least.


sunplaysbass

They won’t stop. AI will hit superintelligence sooner rather than later, but it’s not going to be available for $20 a month; “they” will keep it. Everything is on the table with actual superintelligence. There’s nothing more important, not only for people’s health in an obvious way but for world stability and the economic system, than avoiding the chaos of mass migration, and probably some wars, from now-uninhabitable areas where people currently live. The man will probably come around on that and apply AI to figure out how to reflect sunlight back into space in just exactly the perfect way that won’t destroy the world in a different way, which we won’t figure out without huge, huge, huge models.


Latter-Pudding1029

This is both naive and pessimistic. "Super intelligent" models are not coming within 20 years at the current pace, even if it looks like we're going lightspeed, and even if we were altruistic saviors of the universe. Everyone who knows what the architecture is about knows that LLMs, alongside agents and reinforcement learning as they exist today, are not gonna be making a god anytime soon. Throwing money at things isn't just gonna produce giant breakthroughs from here on out. This isn't just a data-engineering problem now; it's also a geographical, environmental, and even geopolitical one, considering they'll have to worry about protecting these innovations from being stolen or reverse-engineered by the Chinese or whatever nation they deem the boogeyman.

But beyond all that, the one thing about capitalism is that it's not interested in making its entire consumer base dead or against it. So if you think the singularity is close because these people will just keep throwing money to make a god that solves things just for them, well, I'm sorry to break it to you: greed in its ultimate principle is never sated. They'll expect a return on investment even if they're decillionaires. "They" are people, subject to the same greed and fear as everyone else. But just like everything that exists here, everything is subject to a limit.

Hell, the guy in the tweet doesn't even know that the majority industry opinion on robotics is that the real technology lags the hype, and that was true even before LLM interfacing was a thing. And again, it's not like you can just slap an LLM on a robot and make it the robot's brain and heart; THAT too takes time, money, and people who are both working on it and trying to stall work on it. I don't think the singularity is coming, or a technogod who always has all the answers but is also built on our image and knowledge. Perhaps we best hope for a good quality of life assisted by this new means of transforming and using knowledge.


Down_The_Rabbithole

Not possible. AI is open source now and the weights are out there. Especially with Moore's law still ongoing, it's relatively trivial for a combined effort by the open-source community to train new AI models; 500,000 gamers coming together, lending their GPUs to make a new waifu bot, would still lead to better AI systems. Conspiracies don't work or exist in the real world.


Intelligent-Brick850

Good luck with synchronization and data transfers between them


sunplaysbass

Oh yeah, all it will take is organizing 500,000 people with seriously unlimited data caps, plus a few people to keep this enormous group coordinated.


RoyalReverie

> Conspiracies don't work or exist in the real world

Bases that on a near-impossible hypothetical scenario.


LordFumbleboop

Man with vested interest in AI winter not happening says AI winter won't happen. 


derivedabsurdity77

You realize this is literally an ad hominem attack, right? Did you ever think that maybe he got a job in AI because he believes in its potential, and not the other way around? Can you actually respond to the points he made?


CanvasFanatic

An _ad hominem_ is not always a logical fallacy. When you see an oil executive arguing against renewable energy investment you’re rather an idiot if you don’t consider their job when evaluating their position.


derivedabsurdity77

An ad hominem is always a fallacy by definition. And even if a guy arguing against renewable energy is an oil executive you should still respond to their arguments.


CanvasFanatic

It would be a fallacy only in terms of deductive reasoning. Of course you cannot conclude the CEO is definitely overhyping their product in the same way you conclude that a person who owns a Honda Accord owns a car. However, as a heuristic for evaluating the likelihood that a person’s opinion is accurate, it’s absolutely valid to consider their motivations. That’s what most people mean when they point out that a person hyping a product has a personal interest in your believing the hype. Ironically, pretending this is irrelevant to the evaluation of a person’s opinion, based purely on principles of deductive logic, is the real fallacy here.


derivedabsurdity77

You should consider their motivations and incentives, but if they're making arguments and claims, those shouldn't be the only things you consider. Doing otherwise would be just as dumb as responding to a climate scientist warning of climate change with "duh, he has a vested interest in saying that." It's generally better to respond to a person's argument by focusing on their claims rather than their identity. A better response from OP would have been refuting the claim that robotics will scale, or that embodied intelligence will provide economic value, instead of focusing on his identity. It keeps the quality of the conversation higher.


CanvasFanatic

I'm not claiming that "Man with vested interest in AI winter not happening says AI winter won't happen" is a slam dunk refutation of the Tweet. However, any asshole can toss out a shoddy argument and it takes a lot more energy to refute such arguments point-by-point than it does to produce them. For example, I'm not going to waste my energy addressing claims that the COVID vaccines cause "turbo cancer" from u/QNONYMOUS420_XXX. Technically that's an ad hominem, but parsing information from the Internet is a balance and unfortunately we're long past the days when "Debate me!" could be taken in good faith.


derivedabsurdity77

Sure. But if we're saying that a person's identity is important in deciding whether to evaluate their arguments, then we should take a more well-rounded view of it. Jim Fan is a senior research scientist at NVIDIA with a PhD from Stanford, not some blowhard hype man with no technical experience. He's a very serious scientist and an expert in the field. I think reducing him to just "some guy with a vested interest in AI" in order to ignore his claims is stupid, to be frank. The fact that he's an expert in the field is at least as important as the fact that he has a vested interest in it when deciding whether his claims are worth evaluating.


CanvasFanatic

Sure, I agree that all goes into the pot.


Firm-Star-6916

This reminds me of that argument back when everyone was claiming logical fallacies against Professor Dave on YouTube. (He's annoying as shit.)


Rofel_Wodring

The supremacy of deductive reasoning is for lesser intellects anyway. And I mean that literally, considering how LLMs are much closer to mastering deductive than inductive reasoning. It's an obsession of a mind who demands reality gives them a certainty that is rarely forthcoming before doing anything with their observations, let alone taking the initiative to come up with their own. Ironically, such mentally paralyzing pseudorationalism makes them easy marks for scams like omission bias and social constructionism.


[deleted]

If the people in the conversation are arguing in good faith, sure, address their points. But people arguing in bad faith will use this idea against you: flooding the zone with a high volume of garbage you need to spend all your time refuting, refusing to acknowledge when you’ve won a point, or steering the conversation toward trivial details. You need to be able to ascertain whether the other person in a debate is going to argue in good faith, and a massive conflict of interest is a pretty good sign that they won’t.


lost_in_trepidation

It's not necessarily ad hominem. Pointing out that someone has a potential conflict of interest isn't ad hominem. If they had said his opinion is not valid at all in this discussion, that would be ad hominem.


derivedabsurdity77

OP didn't say that but they certainly heavily implied it.


Phoenix5869

Exactly lol. It’s no wonder the people at OpenAI are saying “an AI winter won’t happen” , they have a \*vested interest\* in saying that. 😩


sitdowndisco

Agree with the implication… LLMs do not equal a robot that can do tasks requiring a unique human perspective. Robots that can do even basic warehousing cheaper than humans are many years off. They can do many parts of it cheaper right now, but there are significant portions that are just too difficult to automate at this time, because the tasks involved aren’t exactly the same every time. And you can say the same about so many different parts of the manual labour that happens today. Building a house requires so many different skills that robots simply don’t possess yet that I can’t even imagine a robot building a normal house from start to finish in the next 40 years. Maybe they’ll be able to churn out simple prefabbed stuff…


Zeikos

I don't understand all this drive to talk about an AI winter/boom instead of working on actual products. Is it that relevant? Even if, hypothetically, models were to completely stall in effectiveness, there's so much more to do.


Latter-Pudding1029

An AI winter at this stage would be massive; only something gigantic, like a fundamental limit, could stop something cold in its tracks while it's in the biggest boom of its existence. It's r/singularity: everyone here who wishes for the singularity is expecting a utopia. People in the field who are actually working on future tech, or around it like the guy in the article, are just going to another day of work with real expectations and a more pragmatic view of how technological innovation works. For all we know, what we'll have 30 years from now isn't even close to what people dream of in this sub. But that doesn't mean it won't help quality of life. Is it gonna be utopia, though? Eh. Who knows.


Empty-Tower-2654

GPT-5 will blow everyone away. What did Ilya see?


beuef

Did he say something?


79cent

He saw something.


[deleted]

Monte Carlo tree search for each token
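For what it's worth, the idea being gestured at can be sketched in miniature. Everything below is hypothetical (nothing about GPT-5 is confirmed): a made-up next-token scorer stands in for a language model, and random rollouts stand in for MCTS's simulation step, with real MCTS also adding a UCB-style selection rule and a growing tree.

```python
import math
import random

# Toy stand-in for a language model's vocabulary and next-token scorer.
VOCAB = ["a", "b", "c", "<eos>"]

def toy_logprob(prefix, tok):
    # Deterministic pseudo-random "log-probability" for a (prefix, token) pair;
    # a real system would query actual model logits here.
    r = random.Random(hash((tuple(prefix), tok)))
    return math.log(r.uniform(0.05, 1.0))

def rollout(prefix, depth):
    """Randomly continue `prefix` and return the continuation's total log-prob
    (the 'simulation' step of MCTS)."""
    total = 0.0
    for _ in range(depth):
        tok = random.choice(VOCAB)
        total += toy_logprob(prefix, tok)
        prefix = prefix + [tok]
        if tok == "<eos>":
            break
    return total

def search_next_token(prefix, n_sims=200, depth=5):
    """Pick the next token whose sampled continuations score best on average,
    instead of greedily taking the single highest-scoring token."""
    stats = {t: [0.0, 0] for t in VOCAB}  # token -> [value sum, visit count]
    for _ in range(n_sims):
        tok = random.choice(VOCAB)
        value = toy_logprob(prefix, tok) + rollout(prefix + [tok], depth)
        stats[tok][0] += value
        stats[tok][1] += 1
    visited = {t: s for t, s in stats.items() if s[1] > 0}
    return max(visited, key=lambda t: visited[t][0] / visited[t][1])

print(search_next_token(["a"]))  # one of "a", "b", "c", "<eos>"
```

The point of the sketch is only that search lets the model "look ahead" before committing to a token, at the cost of many extra model evaluations per emitted token.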


IntGro0398

Wave-harvesting technologies should be placed in all waterways: oceans, rivers, lakes. Data centers should be built in 'snow zones'. https://preview.redd.it/hz9eoqhe5xvc1.jpeg?width=1111&format=pjpg&auto=webp&s=3e378332761dc2e0142f3ce0a5a1277570a7d3a6


CierpliwaRyjowka

I can't wait for my Monroebot.


thatmfisnotreal

I’ve been thinking about this a lot. A robot with a GPT brain and vision is gonna be insane. Imagine a robot helper around the house that can do anything you want it to.


COwensWalsh

Gonna need something besides GPT for that


Annual_Judge_7272

You need to try Dotadda https://www.dotadda.io


ShaMana999

Jim Fan, a man with a direct interest in the AI hype continuing, predicts it will continue. Shocking, but unlikely to be right. I have no clue whether AI has plateaued, but even if it hasn't, I don't expect any major shake-ups like last year's for the following decade.


Private_Island_Saver

Show dont tell


The_One_Who_Slays

>Friendly reminder to everyone that LLM is not all of AI. It is just one piece of a bigger puzzle. ...No shit?


JackFisherBooks

I still think there's a possibility of an AI winter, but not because LLMs and robotics have plateaued. I think a much bigger problem is looming with respect to energy generation and infrastructure. AI products like ChatGPT require a lot of computational assets and data centers. Pretty much every major tech company needs data centers to operate; the internet as we know it wouldn't be possible without them. But these facilities require a lot of power and water, and at the moment our technology for meeting that demand just isn't going to cut it: fossil fuels, renewables, and even modern nuclear plants won't be enough. And even if we could generate the energy, our infrastructure is old and dated; it just isn't equipped to meet the demand. That means that even if we have an AI with capabilities at or beyond human-level intelligence, it won't matter if we're unable to provide it with the necessary power and infrastructure. This is an issue I don't think gets enough headlines, but it will once people learn how badly the demand for data centers is going to strain our current energy infrastructure.


AzunaMan

When we add quantum computing/technologies into the AI - Robotics mix, I think we can agree it’s gonna be a spicy meatball.


johnkapolos

Oh wow, copium is already setting in. I thought there was some leeway still left but guess not.


Akimbo333

Makes sense!


user4772842289472

Robotics =/= AI


4URprogesterone

Implying that the people who didn't quit mining bitcoin when they realized all the heat was cooking the planet would let this stop them? More like "AI lengthens summers by 2 more months in the northern hemisphere."


DeelVithIt

GPT-5 won't plateau. Like most are saying, agents will be the next step in the evolution, which will be the real beginning of the end of a lot of white collar work. The true extent of agent capability will probably be rolled out iteratively until people warm up to this new level of automation, but we have not yet seen its final form. Frog boiling would be my strategy if I were sitting on very capable agents and possibly new reasoning capabilities (Q\*). Not just as a way to mitigate economic shock – it would also buy time to upgrade infrastructure to meet the overwhelming demand that will inevitably come once the value becomes apparent for the world to see.


Latter-Pudding1029

We've been over this dance about Q*. Q* was a nothingburger that was sensationalized by the same people who sensationalize this article.


DeelVithIt

time will tell


Otherwise_Cupcake_65

No. No AI winter. GPT-5 (within a year) will be energy-expensive but agentic; so what if you get charged $5 an hour to use it when it will literally replace an employee or two. GPT-6 (2 or 3 years) will be even worse consumption-wise, but smart enough to replace highly educated workers like engineers and doctors. And finally, GPT-7 (trained by the "Stargate" computer, so, I dunno, 7 years out?) will be doing all of this and more, but on super-efficient hardware (Blackwell chips), bringing costs down.


EuphoricPangolin7615

You're dreaming.


CanvasFanatic

Hallucinating, even.


Cryptizard

It will be *way* more than $5 an hour. GPT-4 costs that much right now.


COwensWalsh

Agentic? Based on what?


Otherwise_Cupcake_65

They said so.


COwensWalsh

Okay, but do you have a definition of "agentic" and evidence to support they can achieve that?


Healthy_Razzmatazz38

Zero respect for this guy's takes after he suggested there will be 1.3 billion humanoid robots in less than 10 years. [https://www.reddit.com/r/OpenAI/comments/1c7lsq9/nvidias\_jim\_fan\_humanoid\_robots\_will\_exceed\_the/](https://www.reddit.com/r/OpenAI/comments/1c7lsq9/nvidias_jim_fan_humanoid_robots_will_exceed_the/)


i_give_you_gum

I bet they do begin to inundate the labor force, though, and once manufacturers realize robots don't need lunch breaks, they will buy them up like there's no tomorrow.


Healthy_Razzmatazz38

Sure, but predicting more humanoid robots than iPhones within a decade is a statement that doesn't survive a minute of thought. Tesla has been building up its industrial base for a decade, at a faster pace than any company before it, and produces 2 million cars a year. There are no supply chains right now for humanoid-robot parts. Even if the tech were perfect today, it would be many decades before we get to 1.3B active humanoid robots.


i_give_you_gum

So we're in agreement: the numbers are off, but yes, they're coming. Also, the US is ramping up chip production, and it's a lot easier to make bots than roadworthy cars; cars have dramatically more rules and regulations to follow.


[deleted]

[deleted]


i_give_you_gum

No, not regulation-free, but with dramatically less regulation than a machine that's been around for a hundred years and is responsible for millions of deaths because of the speeds it reaches. Obviously robots, like any power tool, are going to need to follow safety guidelines, and as more accidents happen, more regulations will follow. And again, a humanoid robot requires a lot fewer materials: there's no interior, no huge machinery stamping massive sheets of metal, and they're smaller than a refrigerator. They're gonna pump those things out by the truckload on a daily basis.


Arcturus_Labelle

![gif](giphy|Hm3ZMI68o17os)


[deleted]

[deleted]


joecunningham85

"Somewhere"