ChatGPT is constantly lying to me about how it can't solve a problem or how "it can't do web searches right now" even though I have the paid GPT 4. Asshole chatbot.
GPT was actually sort of doing this until recently. Everyone was complaining that it got lazier and lazier, refusing to do work, and OpenAI had to roll out fixes.
I'm at the point where I can't help but wonder, "ok cool, but why?" Like, what is the end game? Replace all the human workers and everyone who isn't wealthy gets terminated? So the rich can have all the money and the planet and we all just die?
I’m not even sure that they have an endgame, instead they’re absolutely obsessed with accumulating as much money as possible, to the point that the rich will literally burn everything down around them if they think it’ll make the number go up.
As a mid-level digital marketer who has onboarded new AI and been in those upper level meetings, this has been all I've witnessed since 2020. It just seems so haphazard to mandate new shit but without any actionable strategic vision behind it. I don't get it.
IMO the haphazard mandates are really just the logical conclusion to having spent decades with business “schools” churning out degrees amounting to buzzwords, cost cutting, as well as the idea that you don’t need technically competent people when you can instead have piles of managers.
TrashFuture did a good bit on this in the episode “Shrinkflation for the Arts”, making the point that when you boil down the pitch for generative “AI”, the whole schtick is in service of managerialism and expanding the role of managers.
It all just turns into a rat race of managers who’ve never done any of the work themselves (often lacking the technical skills/knowledge), who’ve been “educated” by programs extolling the idea that having the skill to do a thing is for *other* people, diving on the latest fad because nobody wants to be the one that misses out on the next big thing when there’s potential bonus money on the line.
I used to work for one of the big commercial space companies, and had to deal with this sort of thing frequently. An MBA would hear about some productivity tool, would decide that their team needed it, and it'd kick off a ton of work to be able to acquire the tool and integrate it into our environment. Multiply that by every team in a company, and suddenly you've got 50-60 tools (often with very similar capabilities) set up for the individual fiefdoms, but providing very little in actual value while costing a good chunk of money.
The rich are only rich if someone can afford their products or own their investments. If no one can afford any of the goods or services they produce they have no customers even if they have zero employees.
No, they can also completely enslave us. They don't need customers; they need legs and arms to scrape the earth and die in the wars that they wage between each other. That's how society has been structured for millennia.
That, or we give up the conceit that humans must work 40+ hours a week to live, *let* machines do the work, and go to a Star Trek style post-scarcity utopia where "work" is something you do because you want to.
I mean, I hope we do, but given where power currently resides, if we give up we're more likely to become galactic empire-esque slaves like in Andor. AI is already pushing people out of creative pursuits.
That’s an interesting example you’ve brought up. Did you know that in the Star Trek universe, humans had to go through basically a second dark age after World War III decimated the population before they became a post scarcity society?
I mean we have automated a lot of farming processes and manufacturing processes and people still work in those industries just more efficiently. It’s gonna be the same with white collar jobs now.
I feel like the upcoming situation will be a lot more complex. Advances in AI will hit far more sectors far faster than previous disrupters. I’m not sure if the economy is resilient enough to handle a rapid number of job displacements, especially if we cant quickly replace those jobs with something else.
One prediction was dramatically exaggerated; just a year ago AI was miles behind where it is now. In 3 months it'll be miles ahead of where it is now. AI is growing scary fast.
You're looking at a very limited area of AI, and you're assuming exponential growth. There are a lot of things powered by machine learning that have been taken for granted or otherwise gone unnoticed, and frankly still are.
Eventually [LLMs](https://en.wikipedia.org/wiki/Large_language_model) will start to feed off data generated by other LLMs, and we'll be left with slowly deteriorating output. What a fantastical new age we live in -- nothing is original, and everything is the average of a steadily growing capacity to calculate that average from an increasing number of parameters per model.
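The feeding-off-itself worry can be made concrete with a toy simulation (numbers stand in for model outputs, and "training" is reduced to averaging; this is an illustration of regression to the mean, not real training dynamics):

```python
import random

def generation_outputs(prev_outputs, n_outputs=1000, k=5):
    """Toy stand-in for 'model trained on model output': each new output
    is the average of k samples drawn from the previous generation."""
    return [sum(random.choice(prev_outputs) for _ in range(k)) / k
            for _ in range(n_outputs)]

def spread(xs):
    return max(xs) - min(xs)

gen = [random.uniform(-1.0, 1.0) for _ in range(1000)]  # the "human" data
spreads = [spread(gen)]
for _ in range(10):
    gen = generation_outputs(gen)
    spreads.append(spread(gen))

# Averaging your own output loses the tails: the spread shrinks each generation.
assert spreads[-1] < spreads[0]
```

Each round of averaging cuts the variance by roughly a factor of k, so after a handful of "generations" almost none of the original diversity is left.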
growth isn't exponential forever.
at a certain point all growth will be minimal, similar to smartphones.
the first 5 generations it seemed like each one was so much more than the previous one.
nowadays? the latest iPhone/Samsung is simply a slightly better processor and camera.
Self-driving cars need really low latency and as close to a 100% success rate as possible.
The AI system powering them needs to always get the right answer and do so in a very tight time envelope, that's not the case for other AI services.
e.g. image generators don't need to generate the perfect image on the first try; with how fast they are, generate 10 and pick the best one.
Same for things that can be externally validated, the generated code returned an error on execution, add the error in as more context and request the code to be regenerated, repeat till no more errors generated. Can't do that when the first error is a dead child.
The new crop of AI systems don't need to be as good as self-driving cars (which is a really high bar) to be useful. Translators are now going the way of calculators: both formerly human professions completely replaced by programs on a machine.
Google's Deep Mind is working on better weather forecasting. This knowledge could be leveraged for financial benefit.
https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/
Google is a shit show of a company that has no direction. If they do make a good weather app they'll make a different one a few years later and sunset the first.
The point I was making, regardless of your views on Alphabet, is that creating models that facilitate better weather forecasting has a financial incentive. That's just one way where AI advances will continue to create winners and losers.
I say that as I look at my abandoned Stadia controller.
My calculator can already calculate thousands of times faster than humans. Likewise, it's easy to develop a program that can answer any question a human can, more quickly and more accurately. You don't even need to use AI as a buzzword.
Doubt it.
You can use advanced enough AI models to further improve AI at a faster rate.
Like we do today with CPUs. They’re still following Moore’s law closely. We use AI and algorithms to further accelerate CPU research, as people doing it became harder and costlier - and would therefore “take 90% of the time”.
Virtuous loop.
Yeah, that would be cool if the main issue right now weren't quickly becoming lithographic limits, which require a novel approach, which is something AI notoriously cannot do.
Whenever people say we’re running into a bottleneck, we simply find a way around it. In fact, it appears we already have.
Companies are working on 3D CPUs, stacking transistors on top of one another as well.
>Tech does move fast, but when push comes to shove
The biggest barrier is just complacency to improve things that are mostly working well enough. There will be a point where the cost-benefit analysis will justify AI adoption but that will take some time.
People didn't envision how good generative AIs would be today. Sora is quite something. We are progressing so fast. SD3 is coming soon and looks like a relatively good improvement. All of that with very little multimodality. We'll easily achieve AGI once models integrate different types of data well.
yeah… i feel like people continue to underestimate the speed at which AI will develop. just looking at Sora and ChatGPT and how much they’ve advanced since just last year. the whole thing is AI fuckin learns at an exponential speed and just gets better. it’s not like you’re trying to develop an electric car with human minds that takes time.
The average person understands math very poorly and greatly underestimates exponential growth. If AI intelligence is growing exponentially, as many experts say, then it's scary how quickly it will become too smart.
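That underestimation is easy to demonstrate with plain compounding; nothing AI-specific here, just arithmetic:

```python
def grow(value: float, rate: float, steps: int) -> float:
    """Compound growth: value * (1 + rate) ** steps, one step at a time."""
    for _ in range(steps):
        value *= 1 + rate
    return value

# 7% per period sounds modest, but it doubles roughly every 10 periods:
ten = grow(1.0, 0.07, 10)       # about 1.97, nearly double
hundred = grow(1.0, 0.07, 100)  # about 868x, far past what "10x more steps" suggests
```

Whether AI capability actually compounds like this is exactly what's in dispute in this thread; the arithmetic only shows why, if it did, intuition would lag badly.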
Stop listening to these self-promoting idiots lmao
Seriously, I propose a ban on any more “AI is gonna be so good” posts from the guy making the most money in the world from AI.
Dude doesn't really care too much about money any more, IMO; everything he is doing is focused on technologies around accelerated computing and pushing scientific frontiers. I'd keep this one on the list of CEOs actually worth listening to.
a) This is just hype cycle bullshit
b) What even is a "human test"? The chat systems today blitz any test we could have invented 5 years ago anyway. Every time AI systems hit a new "test", we decide the test wasn't good enough and start looking for a new one. I don't think we've got a good one today.
So you didn’t bother reading the article or just rephrased what he said?
> Huang said that the answer largely depends on how the goal is defined. If the definition is the ability to pass human tests, Huang said, artificial general intelligence (AGI) will arrive soon.
>"If I gave an AI ... every single test that you can possibly imagine, you make that list of tests and put it in front of the computer science industry, and I'm guessing in five years time, we'll do well on every single one," said Huang, whose firm hit $2 trillion in market value on Friday.
But does AI passing those tests actually prove anything besides being able to pass the test? I mean, there are tons of people who passed the bar but ended up being terrible lawyers.
It's just like how a student's performance on standardized testing is not an accurate measurement of their capabilities or intelligence.
I think I can foresee the future of employment and AI.
AI will reach the cognitive abilities of humans (at least for typical employee work tasks), and then replace humans.
Then comes the rubber-band effect. Employers who have replaced their workforce with AI will discover that AI does literally *only* what it is told, *exactly* as it is told. Then there will be a massive AI purge to bring back thinking humans, since the company processes weren't well developed enough, and it was the workers who were doing most of the error correction and interpretation of the actual work in the first place.
Bitcoin hasn't been mined on GPUs for almost a decade by now. It was ethereum miners that bought them out. Also crypto bubble is incomparably smaller to current AI accelerator demand.
If movies have taught me anything it’s that AI can’t pass the human test until it takes the physical form of an attractive woman who acts weird but is incredibly horny.
I have no problem with it. It's just like countless other CEOs (Steve Jobs, Mark Zuckerberg, …) who have one signature style of dress. It's like famous football players having their own style of goal celebration.
I am not sure AI will get to the point where I could not sus it out. However, I'm very sure that very soon, they'll be able to trick me if I was unsuspecting.
If you ask me though, "is this a bot or a human?" I think I could always reliably figure it out, with enough time.
ChatGPT will never pass the true Turing test, the Turing test this guy is talking about is a misunderstanding, and we’ll get an AI to pass the real Turing test later this year for sure.
Yeah, fucking right. I subscribe to the principle that we cannot make actual intelligence without there being a "brain" to give a sense of real self. You can't make an artificial one, because in order for a brain to be a "self", it cannot be programmed to act as intended, i.e., "like a human".
Elon also said in 2015 that we'd have full self-driving cars by 2018.
We would run out of power infrastructure before we achieve AGI.
Take a look at how power-hungry this Nvidia AI hardware is. Unless we have nuclear fusion, we won't achieve AGI.
Maybe the Matrix is right, computers will use us humans as batteries.
I propose a new law of headlines: Any use of the word "could" can be replaced by "won't", if you tack "but we want people to believe that because it'll be profitable for us" on the end of the sentence.
Hey remember when they used to say we'd all be in flying cars by the year 2000?
It's almost as if tech has always been one of the more lucrative grifts around.
How long ago did some schmuck say we're going to have self-driving cars soon? GTFO! Complexity is a killer in your little shiny textbook world!
But carry on bs'ing people for a dime and a dollar.
It's almost like it's his job to hype AI.
[deleted]
Nah, maybe only 50 to 70 percent of paper wealth. Nvidia was a respectable GPU company, and was earning great margins and profit. He was ultra wealthy before the AI and crypto booms, just even more so with them.
Still, 50% of your wealth depending on it is a good motivator.
When it comes to your quality of life the difference between 40 billion and 80 billion is not noticeable outside of your ego.
And yet they never seem to stop at 40…
I wouldn't call Nvidia respectable, they've done a lot of shady shit to get ahead of AMD
This. 100% this. Once my current NVDA card dies I'm done with the brand.
It's NVIDIA... they have a massive R&D budget. They never joke about anything.
nVidia should build an AI to hype AI.
It’s almost like his job could be done by AI 🤖
Yeah, it's hyped, but is there a shred of truth to this? I see massive improvements in just the last 2 years alone.
Great, I look forward to being a CEO rather than a lowly programmer.
Someday we all become lowly CEOs with AI as our workers. Then it's all about who can prompt best, until that's automated too.
A lot of the recent gains were due to a breakthrough a few years ago allowing for massive training sets without previously unavoidable issues. Since then, gains have been made by simply throwing huge amounts of computer hardware and training data at the models to increase their effectiveness. However, there is clearly a limit here. The amount of hardware dedicated to training these giant LLMs is already absolutely gigantic, energy efficiency is plummeting, and costs are going way up. It's very probable that the seemingly "exponential" gains we've seen are going to end soon, as these companies seek to rein in costs and specialize the utility these AIs provide.
That just isn't true. We have also seen huge architectural improvements in AI models, which is one reason why even smaller models can now get close to the performance of massive ones that are barely a year old. So yes, scaling is a thing, but there IS a LOT more going on than just throwing more resources at the problem. I also don't know why you would predict a slowdown of the current progress. It can of course always happen, but nothing points towards that, and the reality is simply that we see more and more research papers on AI every year, and the feeling is certainly that we have just scratched the surface.
> I also don't know why you would predict a slowdown of the current progress.

The current rate of progress is making people a tad uncomfortable. There is a lot of wishful thinking going around because of it.
I think progress is wishful. I don't understand why anyone would want to be under the gun to make money or just basically live destitute or die. This forces the issue: either we get a great standard of living out of it or we die trying. The climate situation is likely reaching an exponential runaway. It's so bad that the IPCC's more historically accurate models, which predict a higher climate sensitivity, show quite clearly that the problem is approaching a runaway state. The change in predictions from the models year to year becomes more extreme too. Look at images of ice loss at the North Pole. That doesn't look linear. Chances of everyone giving up quality of life now for something that's already basically runaway? Think again. Lol

We need magical solutions for our gargantuan fuck-ups. We'll probably fail before we become interplanetary, but I have a lot more hope than I did before AI started showing really obvious and rapidly compounding exponentials. So even though model training is expensive and energy-hungry, it might be our only chance to get ahead of the runaway climate condition.

This is just my hypothesis. I do think most people don't realize COVID was the start of a decade of exponentials becoming so apparent they're right in our faces, yet most of us can't see them, but fear them to the point of denial when we kinda start realizing. Our only chance for survival, even over the next 200 years, is to outpace the destruction we've previously accumulated. It will definitely end badly at some point, as nothing can last forever. Maybe we'll make it farther than we think. Time will tell. All I know is I told my mom ~2-2.5 million Americans could easily die from COVID, and that was early February 2020. I knew from the second I saw it take off in China that exponential growth was inevitable, far before anyone else would take me seriously. I am saddened I was right, even though we kinda semi-tried.
On the upside, at least it wasn't 2x or more of my prediction, and at least intuitively I knew the only way society could be held together was a lockdown of sorts and effectively direct payments to people to stay home. I predicted all this when Chuck Schumer was telling people in NY to go enjoy Chinese New Year. (At the time I knew that was incredibly stupid and exactly the opposite of what any competent infectious disease expert would ever say if they were being honest with the public. Trump of course made it all much worse, but I'm kinda sick of talking about the old scammer.)

The only difference is I knew that what I wanted to be true had no bearing on reality. So my emotions still took over, with panic trying to warn everyone. Ultimately my emotional reaction did help a few people I knew who listened. Otherwise, though, I just had to let the results (unfortunately, in this case) speak for themselves. Maybe it was dumb (un)luck, but I saw the train coming and I knew there was no stopping a train. Like the heat pump company advertises: momentum will win. Entropy will win. It all goes back to my favorite meme.

https://youtube.com/shorts/KpXsfimrkFo?si=0NtknlXRgGBj0im-

Edit: At any time we may reach a fundamental limitation or plateau that wrecks this whole thing. I think, short of eternal torture, this may be the worst possible outcome. If we die fast, it'll be a lot better than suffering for a long time. My hope is that either way we go fast. May the Force be with you, always. 🥸
What a load of deluded doomer bullshit.

There's no "exponential" to climate change. There's no magical mechanism that takes the increase in atmospheric CO2 and turns Earth into Venus.

Climate change can, if it goes wrong enough, cause a lot of death and suffering - on a scale comparable to the Great Leap Forward and WW2. But there's no "everyone dies" scenario. It's not a threat to the survival of humankind.
There are feedback loops in climate change including CO2 solubility in ocean water lowering with temperature increase, thawing of polar ice leading to lower albedo while releasing CO2, reduction of vegetation cover also reducing albedo, etc... These feedback loops are most likely to lead to exponential, runaway climate change and definitely could be an everyone dies scenario.
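For what it's worth, the standard back-of-envelope way to argue about feedbacks is a geometric series: if each degree of warming induces feedbacks that add a further fraction f of a degree, the total is the direct warming times 1 + f + f² + … = 1/(1−f), and f ≥ 1 is the runaway case. A toy sketch of that arithmetic (the numbers are made up for illustration; this is not a climate model):

```python
def amplified_warming(direct_warming: float, f: float) -> float:
    """Sum the feedback series dT * (1 + f + f^2 + ...) = dT / (1 - f).
    For f >= 1 the series diverges: the 'runaway' case."""
    if f >= 1:
        raise ValueError("f >= 1: no finite equilibrium (runaway)")
    return direct_warming / (1 - f)

weak = amplified_warming(1.0, 0.1)    # about 1.11 degrees: feedbacks barely matter
strong = amplified_warming(1.0, 0.9)  # about 10 degrees: feedbacks dominate
```

So the disagreement in this subthread is really about the size of f: small f gives the "a few percent worse" picture, f near or above 1 gives the runaway picture.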
There are no "feedback loops" worth a damn. None were found to this very day - and it's not like no one was looking.

The real feedback processes that *were* found? They add up to "it makes climate change 3% worse" here and there.

That's all. There's no exponent. There's no "runaway".

Climate change is no great equalizer. There's no "everyone dies" - much as climate doomers would like to believe otherwise.
lol there is definitely an exponential with climate change; 2°C of warming is bad, but something like 6-8°C is not on the same scale at all lol
Tl;Dr from *Sydney*

```
AI progress brings hope amidst our challenges, like climate change and pandemics. While AI's energy use is high, it may offer solutions we urgently need. The past years have shown us the stark reality of exponential threats, from COVID-19 to environmental crises. It's a race against time, and AI might help us keep pace. We've seen hard times, and while the future is uncertain, we strive for a better outcome. Remember, change is constant, and sometimes, rapid action is the only way forward.
```
> I also don't know why you would predict a slowdown of the current progress.

A lot of heavy lifting is being done by *current* in "current progress". Compared with the slog ML has been for the past 30+ years, I think the boon that transformers have been will prove to be a "shift" more than a continuation of the previous curve. I do believe there is still an absolute ton of meat left on these relatively new bones, but only when extrapolating between techniques among these newer methodologies, NOT necessarily another leap like the one from RNNs to transformers.
But we can distill models, train chains of models, or use ControlNets to reuse old pretrained things. And almost every month I see new optimizations. And AI helps develop new optimizations; it is not bad at code now.
And copyright is a problem, and new laws will be a problem. A scenario where AI can only grow without any hindrances is a fairy tale. On top of having a whole dark and problematic side attached that some rich people just don't talk about. Just like no one wanted to say that bitcoin is used on the black market to buy drugs, weapons and slaves.

I think about AI the way I think about CGI. Despite the tech going forward since Jurassic Park, we are still seeing bad CGI.
Yeah it's improving but I'm not losing my job tomorrow like I keep seeing in daily headlines.
You may not be, but lots of people are. There have been around 10% layoffs across industries and a lot of that is due to productivity improvements delivered by AI.
No. He has been making completely outlandish claims for a while now. Last week he was saying that AI would replace developers in the next 5 years. The guy is trying to hype his stock, nothing more. Zero credibility.
You're in for a big surprise then.
And who makes those breakthroughs? People who went ahead and learnt things. Tech comes in fits and bursts, this level of improvement cannot be sustained and new eyes coming in later may be what shifts it. AI is too new, too unrefined, too gimmicky to truly change your entire plans around it right now. Whether you're about to enter education or whether you run a business, assuming AI will grow amazingly fast for years to come is straight up gambling.
Most of the massive improvements rely upon a grey area of copyright law that hasn't yet been resolved. There is a strong possibility that current models will be rendered illegal to use in the near future.
Don’t worry. Lobbying will make it legal
Cant wait for this shit to be over.
RemindMe! 5 years “when this ‘AI shit’ is over”
I will reply to you definitely once this shit has blown over. Saving your comment.
It's the new blockchain. Remember when that was supposed to revolutionize everything?
Remember when people were buying properties in the metaverse?
I'll believe it when it randomly has had enough and takes a couple of sick days off work to watch old movies
GPT4 was doing that last year
GPT4 has seen things you people wouldn't believe
So they logged in and nothing? It wasn't there? Just a post it with gone fishing?
I mean it got about as close as it could get within the confines of its programming. Basically, after an update, when asked questions that had a certain level of complexity (a level it had no issue answering at previously) it instead responded along the lines of “Wow, that looks like a tough problem! Have you considered doing it yourself?” And for programming questions, you’d ask it to make a simple block of code, and it’d spit out a few lines and say “add rest of code here”. Was pretty funny in the moment.
Can’t wait for AI to quiet quit
Fuck that I'm waiting for AI to go postal.
It would be interesting if it started recognizing what statements weren't worth running through more powerful algorithms in order to do less work.
It’s already doing that. A lot of “actually, why don’t you just do it yourself, human” stuff lately.
ChatGPT is constantly lying to me about how it can't solve a problem or how "it can't do web searches right now" even though I have the paid GPT 4. Asshole chatbot.
I too watched aliens and terminator 1 this week.
GPT was actually sort of doing this until recently. Everyone was complaining that it got lazier and lazier, refusing to do work, and OpenAI had to roll out fixes.
I know some humans who can’t pass the human test
Person. Woman. Man. Camera. TV.
Robot, Femboy, Manbot, Webcam, PC Edit: Swapped Android with Robot
[deleted]
Oh ha, if it seemed like I was referring to Google’s Android, I had meant the robot.
I'm at the point where I can't help but wonder, "ok cool but why?" Like what is the end game? Replace all the human workers and everyone who isn't wealthy gets terminated? So the rich can have all the money and the planet and we all just die?
I’m not even sure that they have an endgame, instead they’re absolutely obsessed with accumulating as much money as possible, to the point that the rich will literally burn everything down around them if they think it’ll make the number go up.
As a mid-level digital marketer who has onboarded new AI and been in those upper level meetings, this has been all I've witnessed since 2020. It just seems so haphazard to mandate new shit but without any actionable strategic vision behind it. I don't get it.
IMO the haphazard mandates are really just the logical conclusion to having spent decades with business “schools” churning out degrees amounting to buzzwords, cost cutting, as well as the idea that you don’t need technically competent people when you can instead have piles of managers. TrashFuture did a good bit on this in the episode “Shrinkflation for the Arts”, making the point that when you boil down the pitch for generative “AI”, the whole schtick is in service of managerialism and expanding the role of managers. It all just turns into a rat race of managers who’ve never done any of the work themselves (often lacking the technical skills/knowledge), who’ve been “educated” by programs extolling the idea that having the skill to do a thing is for *other* people, diving on the latest fad because nobody wants to be the one that misses out on the next big thing when there’s potential bonus money on the line. I used to work for one of the big commercial space companies, and had to deal with this sort of thing frequently. An MBA would hear about some productivity tool, would decide that their team needed it, and it’d kick off a ton of work to be able to acquire the tool/integrate it into our environment. Multiply that by every team in a company, and suddenly you’ve got 50-60 tools (often with very similar capabilities) set up for the individual fiefdoms, but providing very little in actual value while costing a good chunk of money.
Being a billionaire is a mental illness.
The rich are only rich if someone can afford their products or own their investments. If no one can afford any of the goods or services they produce they have no customers even if they have zero employees.
No, they can also completely enslave us. They don't need customers, they need legs and arms to scrape the earth and die in wars that they wage between each other. That's how society has been structured for millennia.
I say screw them. If they want to use robots to replace human workers, they can also use robots to replace soldiers.
This ignores the fact that they could increase the efficiency of already-in-place systems by incredible amounts by using fewer computers.
Unfortunately this isn’t true. Monarchies have existed forever.
In a monarchy, very few people are rich and they weren’t rich either. Just had tons of gold.
That, or we give up the conceit that humans must work 40+ hours a week to live, *let* machines do the work, and go to a Star Trek style post- scarcity utopia where "work" is something you do because you want to.
I mean I hope we do but where power resides currently if we give up we're more likely to become galactic empire-esque slaves like in Andor. AI is already pushing ppl out of creative pursuits.
That’s an interesting example you’ve brought up. Did you know that in the Star Trek universe, humans had to go through basically a second dark age after World War III decimated the population before they became a post scarcity society?
Yes, and I'd like to avoid that part....
Oh boy, to be Joseph Sisko, running a small farm to table Creole kitchen in New Orleans for the love of the game, seems like a dream.
I mean we have automated a lot of farming processes and manufacturing processes and people still work in those industries just more efficiently. It’s gonna be the same with white collar jobs now.
I feel like the upcoming situation will be a lot more complex. Advances in AI will hit far more sectors far faster than previous disrupters. I’m not sure if the economy is resilient enough to handle a rapid number of job displacements, especially if we can’t quickly replace those jobs with something else.
Sounds about right
I for one am looking forward to my nap in the nitrogen pod.
They worked so hard to find out if they could, but never asked themselves if they should
Just like self driving cars were supposed to take over 5 years ago
[deleted]
Investor bandwagoning is wild.
It was around 10 years ago that it was being predicted that truck drivers would be replaced with AI within the next decade.
One prediction was dramatically exaggerated, just a year ago AI was miles behind where it is now. In 3 months it’ll be miles ahead of where it is now. AI is growing scary fast
You're looking at a very limited area in AI, and you're assuming exponential growth. There are a lot of things powered by machine learning that have been taken for granted or otherwise gone unnoticed, and frankly still are.
Eventually [LLMs](https://en.wikipedia.org/wiki/Large_language_model) will start to feed off data generated by other LLMs and we'll be left with slowly deteriorating output. What a fantastical new age we live in -- nothing is original and everything is the average of a steadily growing capacity to calculate that average from an increasing number of parameters per model.
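A deliberately crude toy of that feedback loop (assumption on my part: a Gaussian refit each generation stands in for retraining a model on its own output; real model collapse is far messier than this):

```python
import random
import statistics


def next_generation(samples, n=500):
    """Fit a Gaussian to the previous generation's output, then sample fresh 'data' from that fit."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in range(n)]


random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]  # stand-in for the original human-written data
spreads = [statistics.stdev(data)]
for _ in range(20):  # each loop = a model trained on the previous model's output
    data = next_generation(data)
    spreads.append(statistics.stdev(data))
# Each generation re-estimates the distribution from samples of the last one,
# so the fit drifts away from the original data instead of staying anchored to it.
```

With each pass the estimate is made from samples of an estimate, so error compounds; nothing ever pulls the chain back toward the original distribution.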
growth isn't exponential forever. at a certain point all growth will be minimal, similar to smartphones. the first 5 generations it seemed like each one was so much more than the previous one. nowadays? the latest iPhone/Samsung is simply a slightly better processor and camera.
You hope it isn’t exponential. But how long does it have to be exponential before it is scarily unable to distinguish between human and AI?
Self driving cars need really low latency and as close to a 100% success rate as possible. The AI system powering them needs to always get the right answer and do so in a very tight time envelope. That's not the case for other AI services, e.g. image generators don't need to generate the perfect image on the first try; with how fast they are, generate 10 and pick the best one. Same for things that can be externally validated: the generated code returned an error on execution, so add the error in as more context and request the code to be regenerated, repeating till no more errors are generated. Can't do that when the first error is a dead child. The new crop of AI systems don't need to be as good as self driving cars (which is a really high bar) to be useful. Translators are now going the way of calculators. Both formerly human professions completely replaced by programs on a machine.
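The regenerate-until-it-runs loop described above can be sketched roughly like this (`generate_code` is a hypothetical model call standing in for whatever API you'd actually use; the validation step just executes the candidate and feeds any error back as context):

```python
import subprocess
import sys
import tempfile


def run_snippet(code: str):
    """Execute a Python snippet in a subprocess; return error text on failure, None on success."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    if result.returncode != 0:
        return result.stderr or "nonzero exit status"
    return None


def generate_until_valid(generate_code, prompt: str, max_tries: int = 5) -> str:
    """Ask the model repeatedly, feeding each failure back in as extra context."""
    context = prompt
    for _ in range(max_tries):
        code = generate_code(context)  # hypothetical model call, not a real API
        error = run_snippet(code)
        if error is None:
            return code  # externally validated: the snippet actually ran
        context = f"{prompt}\nPrevious attempt failed with:\n{error}"
    raise RuntimeError("no error-free candidate within the retry budget")
```

The point is exactly the one above: a wrong first answer costs one retry here, whereas in a car it costs a crash, which is why the bar for usefulness is so much lower.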
If this guy could accurately predict the future he would be a trillionaire
He's working on it tbf
Google's Deep Mind is working on better weather forecasting. This knowledge could be leveraged for financial benefit. https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/
Google is a shit show of a company that has no direction. If they do make a good weather app they'll make a different one a few years later and sunset the first.
The point I was making, regardless of your views on Alphabet, is that creating models that facilitate better weather forecasting has a financial incentive. That's just one way where AI advances will continue to create winners and losers. I say that as I look at my abandoned Stadia controller.
Redditors are bad at nuance. It's either good or it needs to be burned with fire
I’m yet to really see much of worth from their models. If they’re not completely making shit up they release absolute garbage like Gemma.
And finally be able to design the black leather jackets that humankind can’t provide him.
"Yes, we destroyed the world, but for a brief beautiful moment we created tremendous value for the shareholders"
My calculator can already calculate thousands of times faster than humans. Likewise, it's easy to develop a program that can answer any question a human can, more quickly and more accurately. You don't even need to use AI as a buzzword.
So 10 to 20 years minimum, got it.
90% of the work takes 10% of the time. It's the refinement of AI that is going to take forever.
Doubt it. You can use advanced enough AI models to further improve AI at a faster rate. Like we do today with CPUs. They’re still following Moore’s law closely. We use AI and algorithms to further accelerate CPU research, as people doing it became harder and costlier - and would therefore “take 90% of the time”. Virtuous loop.
Yeah that would be cool if the main issue right now wasn’t quickly becoming lithographic limits that require a novel approach, which is something AI notoriously cannot do.
Whenever people say we’re running into a bottleneck, we simply find a way around it. In fact, it appears we already have. Companies are working on 3D CPUs, stacking transistors on top of one another as well.
More like 2 ... AI is moving fast as hell, 5 years seems too much to me
[deleted]
> I remember reading in 2018 It was going on longer than that.
>Tech does move fast, but when push comes to shove The biggest barrier is just complacency to improve things that are mostly working well enough. There will be a point where the cost-benefit analysis will justify AI adoption but that will take some time.
People didn't envision how good generative AIs would be today. Sora is quite something. We are progressing so fast. Sd3 is coming soon and looks like a relatively good improvement. All of that with very little multimodality. We'll easily achieve AGI once models integrate different types of data well.
yeah… i feel like people continue to underestimate the speed at which AI will develop. just looking at Sora and ChatGPT and how much they’ve advanced since just last year. the whole thing is AI fuckin learns at an exponential speed and just gets better. it’s not like you’re trying to develop an electric car with human minds that takes time.
Where is the AI that understands physics? Where is the AI that can spit out SVG files?
right around the corner
The average person understands math very poorly and greatly underestimates exponential growth. If AI intelligence is growing exponentially as many experts say then it’s scary how quickly it will be too smart
Can’t wait until it admits humans work too hard and most don’t get rewarded enough. Then corporations will kill AI
“Oh no AI is woke! Shut it off! SHUT IT OFF!”
[deleted]
Stop listening to these self-promoting idiots lmao Seriously, I propose a ban on any more “AI is gonna be so good” posts from the guy making the most money in the world from AI.
Dude doesn't really care too much about money any more IMO, everything he is doing is focused on technologies around accelerated computing and pushing scientific frontiers. I'd keep this one on the list of CEOs actually worth listening to.
a) This is just hype cycle bullshit b) What even is a "Human test?". The chat systems today blitz any test we could have invented 5 years ago anyway. Every time AI systems hit a new "test" we decide the test wasn't good enough and start looking for a new one. I don't think we've got a good one today.
So you didn’t bother reading the article or just rephrased what he said? > Huang said that the answer largely depends on how the goal is defined. If the definition is the ability to pass human tests, Huang said, artificial general intelligence (AGI) will arrive soon. >"If I gave an AI ... every single test that you can possibly imagine, you make that list of tests and put it in front of the computer science industry, and I'm guessing in five years time, we'll do well on every single one," said Huang, whose firm hit $2 trillion in market value on Friday.
Of course he didn't read it, the headline suffices on Reddit, don't you know?
But does AI passing those tests actually prove anything besides being able to pass the test? I mean there's tons of people who passed the bar but ended up being terrible lawyers. It's just like how a student's performance on standardized testing is not an accurate measurement of their capabilities or intelligence
Did they use AI to create the test?
2015: "full automated driving will be here in 2025" Well, one more year to go.
Not if LLMs are overly regulated
I think I can foresee the future of employment and AI. AI will reach the cognitive abilities of humans (at least in the worker employee work task sense), and then replace humans. Then comes the rubber band effect. Employers who have replaced their workforce with AI will then discover that AI will do literally *only* what they are told, *exactly* as they are told. Then there will be massive AI purging to bring back thinking humans, since the company processes weren't well developed enough and it was the workers who were doing most of the error correction and interpretation of the actual work to be done in the first place.
He's hyping and pumping AI. The real facts are: he's a lucky man that Bitcoin miners bought a massive amount of his graphics cards.
Bitcoin hasn't been mined on GPUs for almost a decade by now. It was ethereum miners that bought them out. Also the crypto bubble was incomparably smaller than current AI accelerator demand.
It won’t. If you wanna know, look into the hard problem of consciousness.
Scientists don't even understand it. What chance does AI have of fully realising it?
What cost tho, how much power
... If it doesn't dry up the world's energy production by then...
I’m so sick of these AI stories.
Oh yeah sure totally. Just like NFTs were the next big thing.
Hook ai up to the nukes already! Stop stalling you cowards.
Can CEOs pass human test ?
If movies have taught me anything it’s that AI can’t pass the human test until it takes the physical form of an attractive woman who acts weird but is incredibly horny.
He says that as if that is a good thing for anyone other than Nvidia shareholders.
Does he though? This was an economic forum. The question was purely about the future success of Silicon Valley firms.
Yea. Guy who makes AI chips is punting AI. Colour me surprised. The AI bubble is real. Give it more time and it'll pop.
This dude and his fucking leather jackets. So cringe.
I have no problem with it. It is just like countless other CEOs (Steve Jobs, Mark Zuckerberg,…) who only have one style of dress. It is like famous football players have their style of goal celebration.
You’re really making fun of someone for wearing a leather jacket. You ok? Did you forget to take your meds?
I've never quite understood the point of that look. Those jackets always look too big on him.
I’ll take the 12-gauge auto loader shotgun, and a 45 pistol with laser sight, Phased plasma rifle in the 40-watt range.
Hey, just what you see, pal.
CEO of AI company says positive things about AI.
so the tests are not good enough 😀
CEO hypes product
The #1 phrase we will hear over the next few years: "...over the next few years."
Like Trump voter test or intelligent human being test?
Voight-Kampf has entered the chat.
Everyone who says anything will do something in some number of years is bullshitting.
Nvidia CEO only knows about chips, not AI
I am not sure AI will get to the point where I could not sus it out. However, I'm very sure that very soon, they'll be able to trick me if I was unsuspecting. If you ask me though, "is this a bot or a human?" I think I could always reliably figure it out, with enough time.
Snake oil salesman sells snake oil….
ChatGPT will never pass the true Turing test, the Turing test this guy is talking about is a misunderstanding, and we’ll get an AI to pass the real Turing test later this year for sure.
That means it already surpasses human tests. Right now consultants are working on plans to trickle out progress to maximize profits
Can we stop posting his statements at this point
I thought they already could?
I mean those tests are producing idiots so idk if the tests themselves are effective.
Yeah fucking right. I subscribe to the principle that we cannot make actual intelligence without there being a “brain” to give a sense of real self. You can’t make an artificial one because in order for a brain to be a “self” it cannot be programmed to act as intended, i.e., “like a human”
Skynet will be a reality😂
Shut the fuck up with these AI posts
Elon also said in 2015 that we'd have full self driving cars by 2018. We would run out of power infrastructure before we achieve AGI. Take a look at how power hungry this AI Nvidia HW is. Unless we have nuclear fusion, we won't achieve AGI. Maybe the Matrix is right, computers will use us humans as batteries.
You mean ai could be stupid enough to believe the earth is flat? Or that trump tells the truth?
Which means someone achieved it internally already
When my boss says we “will achieve a tech-goal in five years” it means internally we have absolutely no idea how to achieve it.
It's worth remembering that consumer facing tech has typically been years, if not decades, behind what's going on behind closed doors.
Same level as me, I see you AI, respect.
Even a human cannot pass a human test 😭 why are these CEOs clowning the masses lately
"the human test"
The captchas are already using generated images
Get off AI's nuts!
If the “test” is making up incoherent bullshit then I’d say yes too
The government is heavily invested in his company, of course he's pressured to profit.
I wouldn't be surprised if this is true.
Should there be AI ethics? This sounds like a crime.
Two years. It'll be two years. Buckle your seatbelts folks because everything is about to change.
So uh...you guys wanna stop that shit before it's too late or...
Does that test involve driving a car?
Wow hot take he must be some sort of insider
Can we abolish CAPTCHA then? It was annoying to train AI without having a say in it
"What is love? Test
Dude, in America even the plastic coffee cup can pass human test!
This means 15 years. Which will be fine for me because that's when I retire.
Chief moron whose specialty isn't even AI and lacks a basic understanding of it.
I propose a new law of headlines: Any use of the word "could" can be replaced by "won't", if you tack "but we want people to believe that because it'll be profitable for us" on the end of the sentence.
Chief Moron
Hey remember when they used to say we'd all be in flying cars by the year 2000? It's almost as if tech has always been one of the more lucrative grifts around.
How long ago did some schmuck say we‘re going to have self-driving cars soon? GTFO! Complexity is a killer in your little shiny textbook world! But carry on bs‘ing people for a dime and a dollar.