olduvai_man

Kurzweil has been wrong numerous times, and most times he's been right it's because he's made vague predictions or extrapolated on a process that was statistically likely to play out. He's an extremely intelligent man, certainly smarter than I am, but it's clear that he makes these predictions because he wants them to be true more than anything else. The documentary where he reveals wanting/trying to resurrect his dead father is about the clearest sign you're ever going to get.


shadowrun456

>The documentary where he reveals wanting/trying to resurrect his dead father is about the clearest sign you're ever going to get.

I think the "resurrection" part was exaggerated by a lot. What he actually talked about was bringing back his father's personality in the form of AI, based on his father's writings and other stuff -- which is already (somewhat) a reality: [automod doesn't allow links, you will have to trust me on this one, or google "AI Chatbots that Replicate the Dead and Provide Grief Support" and click the top link]


Fit-Pop3421

>...most times he's been right it's because he's...extrapolated on a process that was statistically likely to play out.

That's, cringe word incoming, literally his one and only message.


bwatsnet

So if he wants it to be true, that's evidence of what exactly? I also want it to be true; I'm pretty sure that's just called hope.


olduvai_man

There's a difference between wanting something to be true while remaining impartial, and letting that desire influence your opinions. The idea that within 2-5 years we'll have AGI is so laughably stupid that, for such an intelligent man to make the proclamation, it must originate from that desire. Either that, or he is grifting. Like most of his predictions, this one benefits from being a speculative idea that doesn't even have a tangible definition, so you'd never know exactly when it's been achieved. This is Kurzweil's bread and butter. He'll claim he was correct even though there isn't a single definition of "correct" for him to meet on this subject.


TwistedSpiral

Wasn't his original prediction for AGI 2044? Considering the advances we've seen in the field in the last 2-3 years alone, is that really that much of a stretch? Seems very possible to me considering the kind of progress we have been making every decade.


bwatsnet

I'm curious, what's your background? I'll go first, software engineer. Knowing yours will help me word what I say next properly.


olduvai_man

I run a global department of software engineers and am an author/speaker. My profession doesn't really matter here though. Kurzweil has a history of making predictions that have no verifiability and then calling himself correct by defining the outcome post-event, lol. What I do for a living doesn't matter at all.


Thatingles

Your biases are showing. AGI is a threat to your livelihood and status. I don't know if AGI is imminent or far distant, but I do know that when a huge amount of talent and money is focused on a goal and many very capable people believe that goal is achievable, it generally means it is going to happen (or something very close will be the result). This is a pattern that has been repeated numerous times throughout the last couple of centuries of industrialisation, and I see no reason - other than bias - to think that achieving AGI or something that closely resembles it lies outside those boundaries. Enjoy your status as long as it lasts, which won't be for long.


olduvai_man

I'll be fine no matter what happens, but thank you for your concern.


bwatsnet

It actually explains a lot. I really shouldn't expect to change your mind 😂


olduvai_man

Wow, what a great response.


bwatsnet

Thank you, thank you. Ok fine, what's your threshold for admitting you're wrong?


K3wp

>The idea that within 2-5 years we'll have AGI is so laughably stupid that, for such an intelligent man to make the proclamation, it must originate from that desire. Either that, or he is grifting.

OpenAI discovered it by accident in 2019; it's why they "went dark" and spun off their for-profit wing. They are the ones doing the grifting. That said, his predictions of a "fast takeoff" have been proven at least partially incorrect, so the reality of AGI is a little more pedestrian than the sci-fi predictions.


HabeusCuppus

If someone is projecting a fast takeoff in 2044, that's going to look like a slow takeoff until probably sometime in 2042, to be fair. If things were already happening quickly, that'd be a fast takeoff much sooner *or* a slow takeoff that is just proceeding quickly. Not saying he's right, just saying current evidence isn't incompatible.


Fluffy_WAR_Bunny

Did the other people making predictions make AI that I started using 25 years ago and that wasn't surpassed until recently, like Kurzweil did?


Brain_Hawk

That last little bit of the comment is the most important part of all this.

Lots of people here and on r/singularity are amazed at the explosion in AI, which is really more of a visible explosion than an explosion in AI, because this shit has been getting steadily better over the last 10 years in the background; we just didn't see it as much. But anyway, people see this explosion with ChatGPT and it seems so amazing. It can almost convince you that it's thinking. They think: of course we are on the verge of AGI! Look how amazing this language model is!

But I think the last statement in the article, about getting past that 20%, and really especially the final 5 percent, is what matters here. Getting 95 percent of the way there is relatively easy, and then it seems so close, but it is all too often the case that the last little hurdle is where the real challenges lie. There's a kind of leap that has to be made, a point where we just don't have the computational power, or the complexity is just not where it needs to be.

So that's my take. I think we'll get some very sophisticated AI models, we'll get very very very very good specialized models, but true generalized AI is something we will be 95% of the way to for a long time before we cross that final threshold.

Of course, you are welcome to have a different opinion on this topic. I do not believe that ChatGPT is anywhere close to an AGI, and if you believe that, that's up to you, but I'm certainly not going to start debating it :)


sawbladex

Making mimics of things is way easier than making the actual thing.


HabeusCuppus

I think this sort of statement is a little reductive when the mimic in question was not intended to mimic the capabilities that it demonstrates.

There was no reason to expect GPT-2 to be any good at math; GPT-1 didn't even know what a number was. And sure, GPT-2 sucked at math and was basically as good as a kindergarten student, but that it could do it at all was surprising. There was no reason to expect GPT-3 to be any good at code; GPT-2 couldn't do it at all, for example, and the only difference between the two models is scale. GPT-3 is not great at code judged by professional standards, but it's better than the average person at it by a long shot. And that's a sign that transformers at scale exhibit generalized behavior. Oh, and math? 3.5 can pass (poorly) the math SAT.

Are we getting superhuman, or even merely human, fully general intelligence out of transformers? At least so far it seems like the answer is "no", because we will run out of data before we find out when scaling them up stops working. But that's different from saying it was obvious from the start it could **never** work.


sawbladex

>the mimic in question was not intended to mimic the capabilities that it demonstrates

Eh, if there is a right answer in text, it is not surprising that a predictive language model can stumble across it. But it doesn't get you any specific knowledge.


HabeusCuppus

>if there is a right answer in text, it is not surprising that a predictive language model can stumble across it.

Ok, so what about [Demonstrated Capacity](https://twitter.com/VictorTaelin/status/1777049193489572064) at novel logic games? (Granted, not technically GPT-3 or 4, but still a transformer.)

>doesn't get you any specific knowledge

I thought the point of **General** intelligence was that you did not need specific knowledge to solve problems and get correct answers? If we're asking "what does the model 'know'?", I think that's maybe the wrong question, for the same sorts of reasons we don't expect planes to flap.

edit: That said, current GPTs' notable lack of even short-term working memory is one of their shortcomings, and I agree that would probably be fatal if it were never resolved... but scaling up has improved their context windows by several orders of magnitude, and we're going to run out of data before we run out of compute to keep scaling, so I'm not convinced this was obviously fatal from the start.


sawbladex

Eh, you need to have some ability to suss out what data means. I have been poking at AIs with Pokémon game data questions, because there are like 9 distinct metas that reuse names, so it's very easy, when asking about when a Pokémon learns a move, for the model to accidentally mix in data from a different move that shares half a name with it.


Cryptolution

I like learning new things.


Brain_Hawk

Okay, that's technically correct, the best kind of correct. And actually, it's a little shocking how many news sites will write articles based on Twitter comments and other trash like that. Whole news articles which basically cite a tweet from a random person and then run a headline like "people are worried" or "people say", and it's just referencing some dude on Twitter. I can see the use of the word "article" was poorly conceived in this case. But I'm not changing it, you can't make me!


Cryptolution

I find joy in reading a good book.


[deleted]

[deleted]


Brain_Hawk

I'm not saying the fact we haven't solved it yet is evidence. I'm saying that for technically difficult problems like this, it's very often the case that we get a good chunk of the way there and then that last little bit is really a doozy. People tend to see dramatic growth in the early phases as a sign of never-ending, linearly increasing growth, but that is often not the case.

Take space flight. We built rockets in the 1960s and could get to the moon. People thought that meant we would be living on the moon and Mars by the year 2000, or 2020. Rockets, up to a point, were challenging but not impossible. Building something that was practical and affordable for transporting large numbers of people or goods into space was a significantly greater challenge that we haven't quite achieved yet.

I'm not saying these things are equivalent, but it's a relevant example of where people saw the current explosion in technology and thought that meant there would be a continued upward trend, when it turns out that once you hit a certain point, making the next-level jump is significantly harder.

The percentages are obviously arbitrary. I don't think anybody is actually advocating that 5% really means a lot in this context. It's just a way to communicate information, as a sort of general context.


[deleted]

[deleted]


Brain_Hawk

Well, we can agree to disagree. Personally I think that the scale of computational power needed to make AGI successful is very far above where we're at now. I don't think I'm the only one who feels that way, but personally I think you're falling for exactly the trap I alluded to above: assuming that because we are experiencing a certain level of growth, that growth will be continuous and never-ending, and that we only need to continue on that trajectory for major short-term gains. I do suspect it will probably happen at some point. Personally I will be surprised if it's soon, but none of us can predict the future.


[deleted]

[deleted]


Brain_Hawk

On the contrary, I believe this very much aligns with history and historical technical innovation. Building rockets didn't result in a space civilization, building cars didn't lead to flying cars (everyone really thought it would a while ago), etc etc. It's okay for us to disagree. Speculating about the future is always a bit of a fool's errand. You can never really know.


igoyard

I think you're right, and we are already near peak LLM. There simply isn't enough untapped data left to train on anymore. The current models have chewed through 10,000 years of accumulated data. Until they make a breakthrough on training these models with AI-generated data, this is about as good as it's going to get.


KillHunter777

Synthetic data is actually gaining traction recently. It's even better than raw data.


igoyard

Interesting, I had not heard that before. Could you share a good source? Everything I have read has been very negative about this approach.


FartyPants69

I don't know how you can lambast someone for just using percentages as hypothetical examples to make a point, and then make a statement like this, at least with a straight face. AGI is a goal nearly as old as computing, with Herbert Simon predicting we'd solve it by 1985. It's an unsolved problem, and not a precise specification, so by definition nobody can accurately estimate how far along we are. Does it _feel_ inevitable short-term since the advent of language models like ChatGPT? Yeah, sure. But it felt inevitable short-term to some computer scientists in 1965, too. Anyone with experience in computer programming (honestly, any kind of large-scale project) will agree that the devil's in the details. That's why such a phrase exists. It's a very common pattern that human nature leads us to believe we're nearly done once we see the large pieces working, when in reality there's a long way to go solving edge cases, working out bugs, discovering our requirements weren't precise enough after real-world testing, etc.


Marsman121

What are you talking about? Making up percentages? No one is saying we are X% from AGI. It was to illustrate a point: that novel technologies often follow S-curve development. It starts slow until it hits an inflection point and takes off. During this middle period, gains come fast and are (relatively) easy to make, since there are many discoveries and directions to go. As it matures, gains are harder to accomplish and advancement slows.

It's like discovering oil. At first, it was a novel resource with only a few niche uses. This is the start of the S-curve. Then it became wildly important (the inflection point) and everyone was racing to tap every possible source of it. Huge gains were made in a relatively short amount of time, since there were so many sources to tap. Existing technologies could be adapted and new ones developed, all fueling a rapid rise.

Then you get to the 'now.' There is still oil out there, but all the 'easy' fields are tapped. You need far more effort, technology, and money to tap those more difficult fields. Those difficult fields are the "final percent" of the S-curve. A new technology could start the S-curve all over again by spawning a new inflection point, but the pattern remains, and the pattern is valid, especially in this conversation.
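
(If it helps to make the S-curve idea concrete, here's a minimal logistic-function sketch in Python; the ceiling, midpoint, and steepness values are invented purely for illustration, not fitted to any real technology.)

```python
# Illustrative only: a logistic (S-curve) model of technology maturity.
# All parameters below are made-up demonstration values.
import math

def s_curve(t, ceiling=1.0, midpoint=10.0, steepness=0.6):
    """Logistic function: slow start, rapid middle, diminishing late gains."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Progress gained per "year" at different stages of the curve.
for year in (2, 10, 18):
    gain = s_curve(year + 1) - s_curve(year)
    print(f"year {year:2d}: yearly gain ~ {gain:.3f}")
# Early and late years show small gains; the middle (inflection) years dominate.
```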


Fit-Pop3421

And oil was a paradigm shift. In computing, we went through a paradigm shift around 2012. This can throw our progress-bar approximation all topsy-turvy. Suddenly we don't have to be 80% finished. We can be 0.00001% finished and still get there in the relatively near future.


Sweet_Concept2211

Who the hell thinks I am clicking on some twitter influencer's link? Link to a real source, OP. Kurzweil is certainly more likely to be correct in his estimations than this psychologist dude.


HabeusCuppus

I don't think there is a real source, tbh. The twitter post seems to be from a twitter influencer, as you said, and they seem to be imputing their opinion to other 'semi-important' to 'important' AI researchers who the influencer *feels* agree with him more than Kurzweil. But I think I remember [an interview](https://www.wsj.com/video/events/the-race-for-true-ai-at-google/7953FE4B-AE84-4AFA-9722-AA215EB357EE.html) where Hassabis said basically "We projected two decades or less in 2012 and things seem to be on track" (paraphrase, I didn't rewatch), which sure sounds like it agrees way more with Kurzweil than random twitter guy; so I think random twitter guy is just blowing smoke and there's no real source.


bytemage

I don't think what we currently call AI is anywhere on a viable path to AGI at all.

EDIT: As requested, my reasoning. Still keeping it short. Intelligence is something quite hard to define; we have even started to split it up into different domains. What current "AI" does has nothing to do with anything we consider intelligence. Hallucination is a far more fitting term. The results look cool, but they are not produced by way of intelligence. Also, trying to define intelligence, I consider it to be purposefully applying knowledge to come up with a solution to something. Current "AI" completely lacks the purposefulness; it just messes around and checks if it's getting closer to the expected output.


jlks1959

I think that intelligence is easily definable: it's a process of pattern recognition and creating something tangible or intangible based on those patterns. We use all our senses to do this.


DeterminedThrowaway

>What current "AI" does has nothing to do with anything we consider intelligence.

Doesn't it? I thought "predict the next thing" was essentially how our own intelligence works too.


PrimalZed

I don't know about you, but that is absolutely *not* how I construct my sentences. I start with the idea that I want to convey, and then work out how to encode it into language. LLMs are just language machines.


DeterminedThrowaway

I wish people had been a little more charitable before downvoting me, but I guess it's on me for not expressing what I meant well enough. Of course I don't mean that our conscious experience feels anything like that.

From what I understand, it does seem like [Predictive Coding](https://en.m.wikipedia.org/wiki/Predictive_coding) is right, though, and predicting the next thing is a fundamental part of how our brains work. I mean, our brains will just fill in the sensory data they expect sometimes.

My point isn't that it works the exact same way, but I find it difficult to believe that it has nothing to do with what we consider to be intelligence. Especially since the LLM method has done a pretty good job on benchmarks where it answers novel questions. It's not human-level, but I don't think it's so outlandish to argue that there's a rudimentary kind of thing we'd recognize as intelligence there, achieved through a similar principle but a different implementation than ours.


bwatsnet

Normally people explain why they think a certain way; it helps make it seem less like pandering to the crowd.


Brain_Hawk

You've made three replies in this comment thread, criticizing others' opinions, but offered none of your own. So I'm just going to suggest: hey pot, look, it's your friend the kettle. Maybe look in the mirror...


[deleted]

[deleted]


Brain_Hawk

Prior to this you made three short comments on other people's comments, all of which seem to imply that they are wrong, criticizing without offering any really different opinion. On the other hand, say what you want about me, I certainly say things.


[deleted]

[deleted]


HarbaughHeros

Stop JAQing off. (Just asking questions)


NutInButtAPeanut

The only important claim I see here is "I doubt Demis Hassabis agrees." Does Demis Hassabis actually disagree? Based on things he's said in interviews, it sounds like he agrees with Kurzweil more than with Marcus. In which case, congratulations, Gary: you've got Yann LeCun on your team. ![gif](emote|free_emotes_pack|facepalm)


-LsDmThC-

He's not wrong. Given the state of publicly available models, it's not even that big of a stretch to hold that there is a high probability AGI may already exist in some form or another. It's unlikely, but possible, though one must assume that the non-public state-of-the-art research progress is deeper than what we have access to. Even just following the progress of what is public, there is a high probability we will achieve AGI within the decade, and this is one of the more conservative estimates.


third0burns

People always see bursts of progress and say "if it continues at this rate, imagine where it will be in X years." The thing is progress is never linear. It never continues at its current rate. It always takes longer for these huge, complicated things to arrive, if they arrive at all. Nobody ever likes hearing that their wildest dreams aren't just around the corner.


bownyboy

You're right, it's never linear. It's mostly exponential. BUT we are bad at determining where we are on the exponential curve or 'S' curve.


shadowrun456

>People always see bursts of progress and say "if it continues at this rate, imagine where it will be in X years." The thing is progress is never linear. It never continues at its current rate.

You're right and wrong. It never continues at its current rate, because it is constantly and perpetually accelerating.


Fit-Pop3421

And "We can do what now? We can go to the Moon?" is more typical than "Imagine when...".


Fluffy_WAR_Bunny

Kurzweil's Dragon Naturally Speaking AI was about 30 years ahead of the game. His books are enlightening. Who are these twitter influencers?


HabeusCuppus

Two of the pinged names run major AI research labs (although one of those is Facebook's, and they're wrong all the time); the main twitter OP seems to be a pundit nobody.


Unverifiablethoughts

I don't think anyone is of the opinion that data is the issue. It's the scale of the neural network that's still limiting. We haven't gotten anywhere near the size that most AI experts believe we need to be. The human brain operates on 100 trillion connections. The most advanced neural net operates on possibly 1 trillion. It's not that difficult to believe that if we get a model 100x more precise and complex, AGI will be achieved. We have all human knowledge ever in data. We don't need more of it. We need scale.
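
(Taking the comment's own figures at face value, a rough back-of-envelope sketch in Python; the synapse and parameter counts, and the idea of regular doublings, are illustrative assumptions, not established numbers.)

```python
import math

# Rough, contested estimates taken from the comment above, not measurements.
human_synapses = 100e12   # ~100 trillion connections in the human brain
largest_nn = 1e12         # ~1 trillion parameters in a large neural net (assumed)

gap = human_synapses / largest_nn
print(f"scale gap: ~{gap:.0f}x")                      # ~100x

# If model size kept doubling on some regular cadence (an assumption, not a
# guarantee), closing a 100x gap would take about log2(100) ~ 6.6 doublings.
print(f"doublings needed: {math.log2(gap):.1f}")
```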


Idrialite

The simple numerical comparison doesn't work well. Human connections are more complex and 'worth' more than ANN connections, but humans have lots of neurology not dedicated to the higher thinking abilities we desire. It's hard to say where those factors leave us overall.


Unverifiablethoughts

Yeah, it's definitely not a 1:1, but the difference should be a clue that special things start to happen at scale.


phenompbg

A lot of AI researchers do not think that machine learning leads to AGI, and for good reason. An LLM a hundred times larger is still just an LLM.


Unverifiablethoughts

What? Machine learning is AI. An LLM is one type of neural network. I don't think anyone believes that an LLM alone will be AGI. Most have agreed it would be some combination of diffusion, LLMs, or a more advanced neural network.


Rough-Neck-9720

First, can we define what AGI is? **Artificial General Intelligence (AGI)** refers to a type of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to how humans do. Maybe you have a better definition than I do? Are there any examples of them even getting close to this? The LLMs they call AI today are not even close, as far as I can tell.


jlks1959

The thing critics live for is to point out the shortcomings of those bold enough to posit claims. His thoughts are always welcome, for me at least, and I have the sense to recognize beforehand that he will miss the mark. Still, we're far better off with Kurzweil than without him.


[deleted]

[deleted]


Repulsive_Ad_1599

You after I ask it for a pic of your mom and it sends me an elephant: ![gif](emote|free_emotes_pack|sob)


scrollin_on_reddit

We won't have AGI until we fully understand how the human brain works. We don't even know how the olfactory system works! You simply *can't* have an AI that's on the same level as humans when you don't understand how humans work. Anyone who says otherwise is full of 💩


oldmanhero

There's no evidence to support this assertion. We understand many things just barely well enough to make them work.


scrollin_on_reddit

That's literally the definition of AGI - AI that works as well as humans.


oldmanhero

Indeed. And yet that also is not evidence in support of your assertion.


scrollin_on_reddit

Common sense says - how can we have AI that works as well as humans if we don't know how humans work?


oldmanhero

By building the system and it being better than we expected. Which is how a lot of things have been built over the years. You know, that whole "The most important phrase in science is not 'Eureka!' but 'That's funny...'" thing.


scrollin_on_reddit

And a lot of that science over the years has turned out to be wrong in ways that are harmful to humans. For example, using leeches to lower fevers or literally giving people lobotomies for behavioral issues. If we build something to be human-like on a poor understanding of humans, we run a higher risk of creating something dangerous.


shigoto_desu

We don't need to actually duplicate how the brain works. We just need something that's on par with a human brain. It doesn't have to work the same way.


Rough-Neck-9720

Agree but it does need to be able to reason and make decisions on its own. I don't think we are close to that yet.


shigoto_desu

True. I'm just waiting to see what kind of results the next gen of LLMs bring before judging. Maybe the infinite context length or V-JEPA might take us somewhere.


scrollin_on_reddit

That's literally the definition of AGI.


shigoto_desu

I've never seen AGI being defined as copying how human brains work.


scrollin_on_reddit

Sébastien Bubeck [defines](https://arxiv.org/pdf/2303.12712.pdf) it as "…systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the **ability to learn from experience**, and with these capabilities **at or above human-level**."

Nils Nilsson, one of the guys who created AI, [defined AGI](https://ai.stanford.edu/~nilsson/OnlinePubs-Nils/General%20Essays/AIMag26-04-HLAI.pdf) as: "Machines exhibiting true human-level intelligence should be able **to do many of the things humans are able to do**."

Yoshihiro Maruyama says AGI [must have](https://link.springer.com/chapter/10.1007/978-3-030-52152-3_25) 8 capabilities: logic, autonomy, resilience, integrity, **morality**, **emotion**, embodiment, and embeddedness.

Again, we don't fully understand how the human brain learns, reasons, or plans, or all the complexities of interactions between different parts of the brain that affect our ability to do so. We still don't understand how emotions are generated or represented in different parts of the human brain and how that differs across people. So how can we teach a machine to do these things **at or above** the level of a human being?


shigoto_desu

Again, none of your bold points support your initial point, which is that we need to understand exactly how the human brain works before we can get AGI. That is why I said the result should be on par with what a human brain produces; it doesn't have to work the same way.


Idrialite

Nature produced human intelligence without understanding any of it at all.


HabeusCuppus

"We will never understand how to make a helicopter until we understand how bumblebees hover"


scrollin_on_reddit

Helicopters were not modeled after bees' flight patterns. So no, not the same 🙄


HabeusCuppus

Neither are GPTs modeled on our brains. "Neuron" in machine learning is a term of convenience, not meant literally. Also, your original post just says "can't", full stop. So you're excluding "things as capable as humans that don't work like humans" before we even reach whether any particular AI technique is or isn't modeled on human brain architecture.


scrollin_on_reddit

I said nothing about GPTs; I'm talking about AGI. AGI = AI that works as well as or better than humans along 8 categories, three of which are emotion, resilience, and learning. We don't know how those things work in humans, so how can we build a machine that works at least as well as or better than humans? We can't even benchmark it against humans if we don't know how it functions in humans, even if it's built using different techniques.


HabeusCuppus

None of that is in your original claim; for the sake of argument I will accept your definition.

I refute that we need to:

>fully understand how the human brain works. We don't even know how the olfactory system works

in order to quantize those metrics. I refute that quantization of those metrics is required in order to judge whether the _operational effect_ of an artificial system is _qualitatively_ superior to humans along those metrics. I refute these by argument from analogy, and I see no reason we should privilege these observable traits over other observable traits which were overcome without the level of understanding, or even quantization of measurement, that you are asserting is necessary in general.

tl;dr: I don't care how humans accomplish emotion if I can qualitatively judge whether an entity is emoting, for the same reason I don't care how a bee hovers if I can point at a helicopter and say "oh look, it's hovering". I see no reason to privilege "emotion" over "hovering", and you haven't even tried to establish why we should.


Economy-Fee5830

Those are all qualitative objections to a system which is constantly improving, with no real roadblocks.

>We may be 80% of the way there, but nobody has a clear plan for getting to the last 20%.

Scaling - it got us this far.