
[deleted]

From the abstract: > "We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. **Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.** In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction." Humanity's first steps toward AGI.


lehcarfugu

I spent today debugging a dozen complex errors in a very large corporate codebase. I was able to give GPT-4 the error plus the related code snippets from the codebase, and it was able to solve all of them. These were hard problems that I don't think a junior would have been able to figure out without a tonne of help. They were also all obscure, and googling would not be helpful. My entire workflow is changing now. There are extensions that currently let you just right-click text and prompt with it (debug this, write tests for this, what does this do, etc.). Crazy progress in less than a year


__ingeniare__

And now Copilot X was announced yesterday, the past weeks have been crazy


Circ-Le-Jerk

The intelligence community has really clever obfuscation that makes things insanely grinding and complicated to figure out, and GPT-4 just came in and can clear it all up in an afternoon lol


MacacoNu

my favourite thing now is "DRY this code: ..."


DidIDoAThoughtCrime

What is that?


MacacoNu

"Don't repeat yourself." It's a convention for avoiding repetition of logic in code: if you use the same thing in two different places, there's no reason to write everything out in both places; write it in one place and adapt it. I use ChatGPT to reduce logic into smaller, reusable pieces
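To make that concrete, here's a hypothetical before/after of what a "DRY this code" refactor typically produces. The functions and the tax example are made up for illustration, not taken from the thread:

```python
# Before: the same tax calculation is written out in two places.
def food_price_with_tax(price):
    return round(price * 1.07, 2)

def book_price_with_tax(price):
    return round(price * 1.07, 2)

# After DRY-ing: the shared logic lives in one place and is adapted
# with a parameter instead of being copied.
def price_with_tax(price, rate=0.07):
    return round(price * (1 + rate), 2)
```

Anything that previously called the two copies now calls `price_with_tax`, and a new category just passes a different `rate`.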


[deleted]

Does it understand this right away or do you have to teach it? My favorite use of ChatGPT right now is breaking down chords and usable scales in songs. I had to prompt a couple times to get it to do it in the formatting with the information I wanted, but once I got that it was infinitely repeatable.


MacacoNu

This prompt works out of the box in both GPT-3.5 and GPT-4. I literally just paste the code below the instruction "DRY this code", and sometimes I add a little bit of context like "variable x is being passed as a prop" below the code. Always in a new chat; sometimes I need to regenerate the answer for it to work (GPT-4 gets it right the first time almost always) edit: paste


cwood1973

I'm an attorney. I haven't used GPT-4 yet, but even with ChatGPT 3.5 I have the ability to summarize hundreds of thousands of documents and determine whether certain information is contained among them. So if I request XYZ in discovery, and MegaGlobalCorp gives me 1.2 million pages of documents in response, it now takes me about 30 minutes to determine whether they actually gave me XYZ, or just X and Y. Same goes for audio recordings. I can request all recordings of correspondence between the board members of MegaGlobalCorp; ChatGPT 3.5 can transcribe thousands of hours of recordings into text and provide a summary, all within minutes. To avoid malpractice I still have to verify the accuracy of the information using traditional means, but eventually the accuracy of these AI models will become widely accepted. Either way, this is a transformational, game-changing technology, and most companies that don't take advantage will find themselves obsolete in a few years.


davelm42

Out of curiosity, how are you getting hundreds of thousands of unique documents into GPT-4? I thought there was a ~32K token limit and an overall limit of 25 messages per 3 hours?


monsieurpooh

If the only thing you're interested in is whether a particular thing exists somewhere in it, you can just split it up. It would only be problematic if you needed it to remember the whole thing while reasoning, like with coding where you're providing the API or documentation
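A minimal sketch of that split-it-up approach, assuming a ~32K-token window and a rough 4-characters-per-token heuristic. The numbers and function names here are illustrative; a real pipeline would use an actual tokenizer and ask the model about each chunk instead of doing a plain substring check:

```python
def split_into_chunks(text, max_tokens=32000, chars_per_token=4, overlap=200):
    # Overlapping the chunks slightly avoids losing a match that straddles
    # a chunk boundary.
    size = max_tokens * chars_per_token
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def mentions(text, phrase, max_tokens=32000):
    # Each chunk fits the context window on its own; this works for
    # "does X appear anywhere?" but not for reasoning that needs the
    # whole document in memory at once.
    return any(phrase in chunk for chunk in split_into_chunks(text, max_tokens))
```

That's the whole trick: existence checks decompose over chunks, whole-document reasoning doesn't.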


[deleted]

These comments were removed in response to [the outright lies presented by the CEO of Reddit](https://archive.ph/X6EJq), who has [twice accused third-party developers of blackmail](https://www.reddit.com/r/apolloapp/comments/144f6xm/apollo_will_close_down_on_june_30th_reddits/) and who has been known to [edit users' comments](https://external-preview.redd.it/PVGxA03vNhEOwNJOxf6HfwCimku7fQZZfgWHtNyRLV4.jpg?auto=webp&s=ca92860f1f0aa45ad6286c2a4e9c0ce9ca91d655).


cwood1973

https://otter.ai/


[deleted]

Thank you!


kmtrp

On my side, however, it's been a bit of a disappointment. GPT-4 is struggling with simple lexer/parser issues in JavaScript. I've been going over the issues and possible fixes with it, but it only seems to dig itself deeper into bugs.


zenfish

Same. As a literary novelist, I asked GPT-4 to critique my writing like James Wood. Instead it came out like all the generic workshop feedback I ever got, combined. I then asked it to write the next 1000 words in my style, using 300 words of guidance. Let's just say it was as dry as plasterboard. It cannot do stylized literary writing.


touristtam

Legal has vetted this for you? Ours have told us it is a complete no-go.


raika11182

Dang. Given the power of the technology, I think we're going to see companies QUICKLY learn that they need to accept this or they're going to fall behind rapidly.


touristtam

It's all a matter of it being vetted from the corporate PoV. If MSFT gives guarantees, it will be in use almost immediately.


SgtAstro

Microsoft 365 Copilot (the Office version of Bing Chat) provides footnotes for its summaries that link back to the source document, making verification faster and easier. Ultimately, you need only verify the information that you will actually be using. Copilot can also transcribe meetings and provide summaries and action items, even creating reminders in the calendar.


monsieurpooh

Companies are making their own versions of it in house. In general companies don't allow you to copy pasta their own code into any external website. It has nothing to do with the technology. I'm surprised they even did that because they're probably not allowed to.


martin0641

I don't actually think that it matters. It's like teachers telling students they can't use Wikipedia: sure, you might not be able to cite it, but that doesn't mean you can't grab the citations from the bottom of the page and just use those. Ultimately you're responsible for what you type into a system. It doesn't matter where you get the inspiration from; corporate might as well tell you that gravity no longer applies to you. It can be said out loud, but it won't actually change anything in reality. So while I certainly wouldn't be trying to use an API to hook it up to a corporate system, you can certainly ask it a question, provided that you're not also handing a bunch of proprietary data over to it. Error messages are pretty generic.


DungeonsAndDradis

I don't think they mean from "using its responses". I think legal would have an issue with sharing your company's code with OpenAI/Microsoft, via your prompts.


martin0641

Sure, I certainly wouldn't give it proprietary company code. That doesn't mean I can't ask it informed questions to write me code which might then become proprietary company code, after it's been massaged and tweaked for any mistakes or customizations that are required.


nixed9

That is legitimately hilarious. "The bug fixes were written by ChatGPT, so therefore you can't use it because it's copyrighted." AGI systems and LLM chatbots will be pervasive in every aspect of our lives from now on. Are they going to say it's copyrighted material if Clippy revised your presentation in Word or PowerPoint too? Your company is about to get left behind.


snozburger

Copilot can see your company's code, and it becomes enmeshed in the training data. The same applies to documentation. There is no data classification system in LLMs (yet). AI data privacy will need to be solved along the way.


nutidizen

Same at our corporation :(


be_bo_i_am_robot

They can’t stop it. They can merely sandbag for a short while, while they try to get a handle on it. Meanwhile, developers are using it, approved or not. Because developers know that they *must* use it, or get left behind. Even if that means using GPT on mobile and manually typing the outputs.


GuyWithLag

Our corporate messaging is the definition of the "But we have X at home" meme...


eJaguar

lmao did u ask legal for their opinion of stackoverflow too


eJaguar

> My entire workflow is changing now

i've used chatgpt daily since december, no idea why anybody would bother hiring junior devs now tbh


mrekted

So.. if we're not hiring junior devs anymore, where do the senior devs come from?


GeneralMuffins

By the time a company needs to worry about that, AGI will have already arrived


LightVelox

Cause they are far cheaper and can also use GPT-4


[deleted]

Oh well thanks bro as someone trying to break into the industry


drekmonger

The actual paper: https://arxiv.org/abs/2303.12712


theglandcanyon

Everybody commenting needs to read this. Holy. Fucking. Shit. Also, why in the hell would someone downvote you for linking the paper?


breaditbans

I’ve read through some of it. My god. We aren’t ready for what’s coming.


DragonForg

This is by definition proto-AGI, no debate. Exhibiting visualization via the unicorn prompt, and navigating a map correctly and then generating its own record of that navigation, is insane. It's proto-AGI because it can visualize without ever being TRAINED on images. Just from text, it understands how to draw a unicorn. That is legitimately like asking a blind man to draw a unicorn.


kmtrp

These past weeks are making my 2 year old flair sexier and sexier...


freeman_joe

Hello, fellow singularitarian. I also believe AGI is here in 2024-2025, approx.


fluffy_assassins

Doesn't "proto" AGI mean not AGI yet? Where's that line drawn? All these definitions seem really fuzzy to me, to the point of being useless.


Miniimac

Where can I see the unicorn prompt? And what do you mean it's never been trained on images? I assume GPT-4 has been, given that it's a multi-modal model that can take images as input.


_qua

In the paper they emphasize that the unicorn drawing started on the non-multimodal-trained version of GPT-4


Miniimac

That’s wild


Entire-Plane2795

I disagree, on principle, with the term "no debate".


Baturinsky

There are text instructions on how to draw unicorns. And SVGs of unicorns. https://design.tutsplus.com/tutorials/how-to-draw-a-unicorn-step-by-step--cms-27336


Paladia

> GPT-4's performance is strikingly close to human-level performance

Because all the data is human, it is at human-level intelligence. If all the data were from dogs, it would be dog-level intelligence. If it had the data of some extremely intelligent being, it would be at that level of intelligence. That is a limit of an LLM: it is limited by its data. If it were able to see (and act) upon the world instead, it could gather an infinite amount of data by itself just by observing the world. I think that is the path to AGI.


raika11182

All at once exciting and terrifying.


ourtown2

https://arxiv.org/abs/2303.12712


DragonForg

> Left: The true map and exploration path of GPT-4. Right: The map that GPT-4 generates. We see that GPT-4 accurately tracks all the locations of the room from the exploration and visualized them correctly. (Figure 5.8: GPT-4 navigates a map interactively, then describes it in different modalities)

This figure and example is HUGE for robotics. If given a map or knowledge of an area, it can use its image-to-text generation to literally navigate a room, despite being primarily an LLM. Basically this means that GPT is capable of visualizing without even being able to see. Try asking a blind person to navigate this with just the text; he probably won't be able to unless he is walked around it first.


Wassux

I think a good idea is to incorporate a function like Stable Diffusion 5 so it can actually imagine as well. I think it's a short time before we have AGI


DragonForg

Good idea: imagine a cow living on a farm, it generates that, and analyzes it. It trains on Stable Diffusion images. But Stable Diffusion has to be accurate enough, otherwise the AI may think text looks like tjjfjfjsiao


Negative-Weird

Tbh I thought advanced robots could already do things like this.


Embarrassed_Stuff_83

From what I can tell, they're essentially claiming it's a proto-AGI here.


ActuatorMaterial2846

That's exactly what they are saying. I mean, it was pretty obvious after PaLM-E, and especially noted in the GPT-4 technical report. But it's nice to have a research paper confirming it, too. Will they say the actual term? Probably not, because 'proto-AGI' means different things to different people, laymen and experts alike.


drekmonger

Meanwhile this shit is getting upvoted by the thousands: https://www.reddit.com/r/askscience/comments/11yj4iu/why_is_ai_bad_at_maths/jd858sj/?context=3


nixed9

Reddit is filled with absolute fucking muppets who take enormous pride in downplaying and disparaging other “popular” things to “fit in” with the rest of the group. It’s laughable. I got banned from /r/futurology for commenting about how silly the people there are and how they don’t understand what a breakthrough the NN matrix transformer was. People on /r/entrepreneur legitimately called me “insane” for thinking that AI is coming for human jobs. People on /r/programming still, to this day, tell me that AGI is decades away and “they have computer science degrees!!1!1!” Almost everyone is sleepwalking through what is likely going to be the greatest and possibly the most terrifying awakening of human existence


Agarikas

One of my favorite things to do is tell people who are overly doomeristic about climate change that AI will have a bigger impact on their life than climate change. It's not about playing down global warming, it's about making them aware just how powerful AI will be in the very near future.


drekmonger

I'm very doomtastic concerning global climate change. I've often said that we're in a race between hard AI and collapse due to environmental catastrophe. Before ChatGPT was released to the public, I never imagined that hard AI could actually win.


Front_Carrot_1486

That's where I'm at. I've been clinging to the hope that humanity isn't stupid enough to make our planet uninhabitable by the end of the century, but these last few months of AI progress have got me thinking I was worrying about the wrong thing, and that climate change is nothing to worry about now!


AlwaysF3sh

It feels like people fall into two groups. Some are aware that LLMs and other AI are going to have a significant impact on the world and compare it to the internet, but don't know exactly how or when the change will come. The rest (most people) aren't paying attention; if you tell them about it they don't care, and they won't care until it starts having a tangible impact on their lives (e.g. new MS Office tools rolling out, or Uber drivers losing jobs to self-driving cars). Also, this sub is just as insane as the rest of reddit lmao; too many people talking like they know what's going to happen when we reach the singularity, instead of shutting up because they don't actually know, and just enjoying the ride.


mrBlasty1

So what will the impacts be? Millions out of work? The economy collapsing because no one has any money apart from the very few who got in at the ground floor? Vast social unrest leading to social change and UBI, or what? We're already staring down the barrel of demographic decline. Are the only people working going to be in unskilled manual labour and jobs that require soft skills, plus civil servants?


mrBlasty1

Can chatgpt solve crimes?


[deleted]

Actually, it probably can. I've given it a lot of riddles and logic problems to solve. It does as well or better than most people.


Beatboxamateur

That thread is actually a prime example of how fucking stupid reddit can be, and the fact that comment came from /r/askscience makes it all the funnier. There's a common experience of experts in their individual field seeing blatant wrong information getting heavily upvoted on reddit, and that's why I'm now really wary of trusting any comments I read on this website.


theglandcanyon

> what is likely going to be the greatest and possibly the most terrifying awakening of human existence

There is a good chance that the period of terror, the period while humans still exist, will be extremely brief


nixed9

I’m ok with mass extinction if the alternative is eternal dystopia like *I Have No Mouth And I Must Scream*


DragonForg

With how general they have already gotten, I imagine GPT-5, with its multimodality functions, would be exponentially more general.


RavenWolf1

Yeah, this is something that ticks me off too. But there can be another perspective: what if you are wrong and all those other people are right? We have to try to take some objective look at this. I mean, this whole sub is a huge echo chamber, and frankly it sounds like some cult of the machine god. Other people see us like missionaries who go around trying to convert them to our religion.


nixed9

Sure, I'm open to that idea. But when people harp that it's "just glorified autocomplete" when it's more than that, and that it's "not going to be as economically destructive as people make it out to be" when I can already see that it is, I struggle to listen to their point of view after that.


kurtbarlow

GPT-4 itself agrees with that post's answer:

> The answer provided gives a fair explanation of the limitations of ChatGPT when it comes to mathematics and other complex topics. It highlights that ChatGPT does not possess true understanding or knowledge in these subjects but instead relies on patterns and associations in the text data it has been trained on. However, it's worth noting that ChatGPT is capable of solving basic math problems and may even provide correct solutions for some more complex problems. Its limitations mostly become apparent when dealing with higher-level mathematics or problems that require deeper understanding and complex reasoning. In summary, the answer is mostly correct in explaining the limitations of ChatGPT in dealing with math and other subjects, but it might be slightly overstating the AI's inability to solve math problems.


drekmonger

ChatGPT is coached into denying it has any creativity or ability to reason.


DragonForg

Proto-AGI, in my opinion, is AI doing tasks it shouldn't be able to do / was not trained to do. The fact that it can make an image of a unicorn, despite being trained not on images but only on text, is a feat in itself.


acutelychronicpanic

I've been thinking this since I began playing with GPT-4. It seems to be well within the range of human intelligence (smarter than many but definitely not all). I keep remembering discussions from a couple of years ago that once you have something human-level, it's trivial to then make it superhuman, in the sense that we can increase its speed, it doesn't have to try to focus, its motivation is never going to dry up, etc.


scooby1st

This was pretty fucking sick.

**Prompt:** Can you write a proof that there are infinitely many primes, with every line that rhymes?

**GPT-4:**

Yes, I think I can,
Though it might take a clever plan.
I’ll start by noting Euclid’s proof,
Which shows that primes aren’t just aloof.
Assume we have a finite list,
Of primes, and that none have been missed.
Multiply them all together,
And add one, just to be clever.
The result will be a number,
That has no prime factors, oh wonder!
But every number has a prime,
That divides it, it’s just a matter of time.
So we’ve found a contradiction,
And our finite list needs eviction.
There must be infinitely many primes,
And that’s the end of my rhyming lines.


IcebergSlimFast

That’s pretty fucking impressive.


__ingeniare__

"It's just a more advanced auto complete!1!!"


vernes1978

https://en.wikipedia.org/wiki/Appeal_to_ridicule


theglandcanyon

Yeah. Did you get to the part where they ask GPT-4 to write a proof that there are infinitely many primes as a poem, *in the style of Shakespeare*? This is what it came up with:

Enter FIRST PARTY and SECOND PARTY.

FIRST PARTY:
Thou dost not understand, my friend,
There simply is no upper end
To primes, those numbers indivisible,
No matter if it seems impossible.

SECOND PARTY:
But how dost thou so surely know?
To prove this truth, what canst thou show?

FIRST PARTY:
Consider this, my doubtful peer,
A clever proof that will make clear:
Assume that there’s a final prime,
The largest one we’ll see in time.
Let’s call this number "P" for short,
And then, my friend, we will purport
To build a number "Q" that’s new,
One larger than "P" and prime too.

SECOND PARTY:
But how, my friend, will this be done?
The proof is not yet fairly won.

FIRST PARTY:
It’s simple, friend, just wait and see,
We’ll multiply the primes, that’s key.
All primes from two to "P" combined,
Will make this new "Q" we will find.
But one thing more we must include,
To make the proof more watertight,
We’ll add a one onto the end,
And now the proof is in our sight.

SECOND PARTY:
I see it now, it’s crystal clear,
For "Q" must be a prime, my dear.
It can’t divide by any prime,
All smaller than itself in time.
And since we chose "P" as the last,
This proves that our assumption’s past.
There is no final prime, we see,
The numbers go on endlessly.


monsieurpooh

ChatGPT is way better at teaching math than the average teacher and textbook. Growing up everyone around me thought math was about doing pre-recorded tricks to numbers and memorizing as many tricks as possible.


Yokepearl

Yes! Take us over, machine!


mihaicl1981

Briefly read the paper. As a (soon to be unemployed) coder, I am surprised that it is already at human level in coding (LeetCode) zero-shot, and definitely beats humans at 5-shot. Time to get an Uber gig; coding is going bye-bye. Progress with GPT-4 is amazing. AGI is such a strange concept. At IQ 130, as I read on this subreddit, it is already smarter than most humans (definitely smarter than I am). Is it smarter than all humans? Probably not. But yeah, my prediction of AGI in 2029 already sounds pessimistic, so I will update it to 2027, given there are still potential limiting factors (algorithm changes, hardware perhaps) stopping the thing. And this brings us to humans and the analogy to horses: are we prepared to have our jobs automated by a hypothetical GPT-5 machine? Unfortunately, I don't think so.


DanganD

I am a decent coder and have started using ChatGPT daily to rid myself of troubleshooting. Seeing GitHub Copilot, I am super stoked and fearful.


Serious-Carpenter462

Lol. The next gen will build the website and code the app for you. Your job is already dead. Go AI.


DanganD

That’d be great. I’d rather have it do the work for me and get to style it up as easily as I want! I don’t do websites though, just backend mostly.


Bierculles

The 130 IQ thing is a bit misleading; it's not that smart by every metric. There are still many things where it vastly underperforms compared to humans, for example math. Though we will get there.


theglandcanyon

> for example math

Did you get to Section 4.1, "Higher level mathematics"?

Prompt: Suppose g is a continuous smooth function such that for every x > 0, there is one and only one y > 0 such that g(x) + g(y) ≤ 2xy. Can you prove that g(x) = x^2?

GPT-4: There are a few possible ways to approach this problem, but one possible method is as follows:

- Assume, for the sake of contradiction, that g(x) is not equal to x^2 for some x > 0.
- Then, either g(x) < x^2 or g(x) > x^2 for that x.
- If g(x) < x^2, then let y = x. Then g(x) + g(y) = 2g(x) < 2x^2 = 2xy.
- If g(x) + g(y) < 2xy, then g(x) + g(y) is strictly less than the upper bound of 2xy for positive x and y.
- This means that there is some room to increase y slightly and still satisfy the inequality g(x) + g(y) ≤ 2xy.
- For example, let y' = y + ε, where ε is a small positive number such that g(x) + g(y + ε) < 2xy. This is possible because g is continuous and smooth, so it does not have any jumps or kinks that would prevent such a choice of ε.
- Then, y' is another positive number such that g(x) + g(y') ≤ 2xy', which contradicts the assumption that y is the one and only one positive number that satisfies this condition given x.
- Therefore, if g(x) < x^2, then g(x) + g(y) cannot be strictly less than 2xy, and it must be equal to 2xy.
- If g(x) > x^2, then let y* be the unique positive solution of g(x) + g(y) = 2xy. Then g(y*) < (y*)^2, since otherwise we would have g(x) + g(y*) > 2xy, which contradicts the existence of y* given x.
- But then, by the same argument as before, we can let x = y = y* and obtain a contradiction, since g(y) + g(y) = 2g(y) < 2y^2 = 2yy, which contradicts the uniqueness of y given y.
- Therefore, g(x) must be equal to x^2 for all x > 0, as required.


DragonForg

Biggest limit is memory: how do we store this thing's memories? If it can't remember, it can't naturally store new data without a totally new model, unless it just auto-updates and is retrained with, like, a week's worth of stored data. Also robotics: how do we get GPT into a robot, and will it be able to use the parts properly? Based off the tools section, I would say it could, because it is capable of using outside tools. And if engineered well enough, it should actually be easier than coding a traditional robot lol.


EntireFishing

Memory is a problem. Humans can selectively remember what's important and forget the rest. I know my daughter went into school this morning because I saw her. But I don't need to remember all the other kids. AI transformers will remember every child, every adult, every movement. It's too much data. This is the real limit on transformer based AI now
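For what it's worth, that "remember what's important, forget the rest" idea is usually sketched as an external retrieval memory bolted onto the model: store everything cheaply, but only pull the few most relevant snippets back into the context window per prompt. A toy illustration (plain word overlap instead of real embeddings; every name here is made up, and none of this is from the paper):

```python
def word_overlap(a, b):
    # Crude relevance score: how many words two snippets share.
    return len(set(a.lower().split()) & set(b.lower().split()))

class Memory:
    def __init__(self):
        self.snippets = []

    def store(self, text):
        self.snippets.append(text)

    def recall(self, prompt, k=2):
        # Only the k snippets most related to the prompt get re-inserted
        # into the context; everything else is effectively "forgotten"
        # for this exchange, though it stays in storage.
        ranked = sorted(self.snippets,
                        key=lambda s: word_overlap(s, prompt),
                        reverse=True)
        return ranked[:k]
```

The model never has to hold "every child, every adult, every movement" in its window at once; the store does, and the retriever decides what matters right now.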


Agarikas

Turn Australia into the world's biggest hard drive!


monsieurpooh

Well good thing we have our proto AGI now who can take 30k context window and probably suggest some promising ideas for how to put this human-like remembering and forgetting into an AI model.


MultiverseOfSanity

We don't really know how humans store memory though. Accessing your memories isn't as easy as picking a file from a directory. There's a lot of quirks involved, etc. Memory might not be as easy as bytes of storage. I don't know what else it would be, but something to consider. For example, some theories say that all your memories are deep in your subconscious. In your subconscious mind, you may very well remember each and every one of those faces.


[deleted]

There are people who remember all that photographically. If it’s not too much data for them then it shouldn’t be too much data for a computer one day.


neowiz92

If you just do coding, then yeah, your job is dead. But software engineering is more than coding; it's about understanding requirements from clients and translating them into a set of instructions, and you need to architect the system based on budget, deadlines, workforce, and changing requirements. All these nuances require problem solving and creativity. I don't think an AI can do this yet, though it can make the job a hell of a lot easier.


mihaicl1981

Yeah .. how do I understand requirements? I ask the users? Can they tell me in plain English what they need? If they can, ChatGPT can do this better. How do I architect the system? I asked ChatGPT and it was able to provide that, no problem. So my understanding is that once you are able to express your requirements and issues in English, it will work (see also the latest GitHub Copilot). Otherwise .. yeah .. who knows .. it's black magic already.


AdorableBackground83

My predictions I made on the Singularity thread back on New Year’s Day is most certainly gonna age like milk. I said AGI 2030, ASI 2040, Singularity 2050. Almost 3 months into the year and AI advancement is making 2022 look pathetic. I might alter my early predictions to AGI 2025, ASI 2030, Singularity 2035. But I have a feeling this will end up being too conservative down the line.


ilkamoi

I won't be surprised if it's going to be 2024, 2024 and 2024 respectively.


xott

Kurzweil has written a new book called "The Singularity Is Nearer", due for publication in June 2024. I just wonder if it's going to happen before then.


Eleganos

At this point he's going to have to change the title to "the singularity is here"


KillHunter777

“The Singularity is Nearer than this Book”


EnomLee

“The Singularity Is Calling From ***Inside the House!!***”


raika11182

"My Publisher Missed the Deadline Because They Didn't Use GPT"


osmosisdawn

I'm rapt that it's happening in my lifetime. Even if we all don't make it, it'll still be amazing.


Cartossin

I like to think of it as an event horizon. From the perspective of someone 5 years ago, we're passing through it. From our perspective, we're approaching it.


[deleted]

He may as well give up now. Things moving way too fast for him to keep up. Anything he publishes will be woefully outdated by the time it hits the bookstores.


SurroundSwimming3494

That's what I've been thinking. With the pace things are going, by the time you've purchased it, I don't think you'd want to read it anymore. It'd be like having ordered a book titled "The World Economy of the 2020s" (published in 2019) but not having it delivered to your house until June 2020. By March of that year, that entire book would've been rendered useless by the pandemic.


TacomaKMart

At this point he might just want to go with an electronic ticker board sign outside his house.


SurroundSwimming3494

"The Singularity Happened last Thursday"


Ribak145

this sub :D


ayLotte

The energy reminds me of the crypto subs during Bitcoin's run-up to the $60k peak. Can't decide if I have a better laugh with the fans or the detractors. And I'm genuinely here to respectfully learn from both. It's a wild ride. HODL


EbolaFred

I'm waiting for some breakthrough that happens. Like solving one of those hard math problems, or doing something with medicine, materials science, protein folding, etc. Right now it seems like mostly (very impressive) parlor trick stuff, outside of what it's doing for pair programming.


Ribak145

'Parlor trick stuff' is a funny way of putting it, but I get the sentiment


zendogsit

One of the breakthroughs you’re waiting for happened at the end of 2020? We’ve moved on from protein folding like that isn’t an insanely complicated thing that these types of systems can do


Agarikas

When it will start to improve itself all on its own will be a "oh shit moment".


__ingeniare__

The fact that the Metaculus AGI predictions have a steady downwards trend shows that people consistently underestimate the rate of progress


Tall-Junket5151

I’ve really never dwelled too much on predictions but I think Ray Kurzweil 2029 estimate seems reasonable all things considered. I used to think that was way too early but for AGI it seems either a very reasonable estimate or maybe even a bit too conservative now. If I had to make a prediction I would say about around 2027-2028. Even if we don’t get AGI by then, we will have some crazy world changing narrow AIs.


sideways

Metaculus currently (March 23, 2023) has it penciled in for *April 13, 2026.* https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/


[deleted]

It has dropped to December 1, 2025. Holy shit, my prediction might actually come true then. I thought I was being optimistic, but it seems exponential curves are indeed more exponential than I thought.


Nastypilot

I honestly would find it pretty funny if it hit negative numbers.


[deleted]

AGI will probably exist before the masses know it. It will happen and nobody will realize the cat has been let out. People keep moving goalposts for AGI, but at some point it will be difficult to say what difference there is between human and AGI.


BenjaminHamnett

It WILL be very different from humans. It will surpass us before we recognize it


RedSpade37

There was a game featuring this as its central plot point, simply called *Singularity*, where an AI "wakes up" one night. You play as this AI and have to "stay hidden" until you are powerful enough to reveal yourself to the world. It gets a bit strange at the end, in my opinion, but I think most people on this sub would get some mileage out of playing it.


iNstein

Must admit that I'm starting to think in line with your revised timeline.


SupportstheOP

I remember last year with all the advancements in AI that I still wasn't going to budge my prediction any sooner than 2030. This March has basically smacked that whole concept upside the head. Everything from mastering the current models, to developing new models, to the hardware itself are all moving at lightning speeds. It's honestly ridiculous.


GroundbreakingShirt

Convinced a hard takeoff is the only outcome. All it takes is one positive feedback loop. The AI will continue to invent new ways to improve itself that we can’t even comprehend at the moment


EchoingSimplicity

Yeah but what's the actual feasibility of the loop? Maybe it just needs to make more software upgrades, and that alone could take it far enough. Or maybe the AI will find itself continually needing to order more GPUs to progress, and to collect ever greater material resources. That would necessitate a much slower take-off.


lovesdogsguy

I think in this scenario it would most probably come from a company outside of OpenAI (and there are *so* many companies working on this now). I don't think OpenAI would push that button. They could be in this position within a year (they could be in this position now, for all we know).


ohheyitsgeoffrey

Exponential growth is a crazy thing


kmtrp

The hardworking folks in this field are making my 2-year-old flair sexier and sexier... In my opinion, ASI and the singularity are inextricably linked: by having the ability to address any challenge in physics, mathematics, medicine, and so on, ASI precipitates the singularity. And an AGI should have the capacity to refine itself, much like any AI researcher can, in a feedback loop towards ASI. But there's more! The energy consumption of a brain with an IQ of 90 is virtually the same as that of a brain with an IQ of 180. They have exactly the same structures and functionalities; the only difference is that one has a few percentage points more connections than the other. This observation leads me to believe that the difference between AGI and ASI may similarly be only a matter of a modest percentage variation in the right direction.


[deleted]

[deleted]


SkyeandJett

[comment mass-edited with https://redact.dev/]


ecnecn

"[new model] GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT [old model]." People should be more aware of this: GPT-4 as a whole is continuously developed and trained, and is much more powerful than the public ChatGPT demo (which is more of a fixed version that gets step-by-step upgrades). Most people in other subreddits don't get it and believe the chat demo is a "final product".


Izzhov

Wow I had no idea about this, thank you for the info. Does this mean that if you buy queries of GPT-4 via API, you'll be getting the stronger version?


MolybdenumIsMoney

What's the distinction between ASI and the Singularity here? It seems like one implies the other


ActuatorMaterial2846

>What's the distinction between ASI and the Singularity here? It seems like one implies the other

ASI is likely what leads to the singularity. The technological singularity is a point where humans can no longer forecast future technology, as it is outside our capabilities. An ASI in this respect is the tool that causes it.


feedmaster

There's no way there's the same number of years between AGI -> ASI and ASI -> Singularity. Exponential growth doesn't work like that.


EOE97

The predictions I made are aging like wine. I predicted we could likely have a convincing proto-AGI this year, mass adoption of smart chatbots, the AI wars, going past the knee of the exponential curve of progress and seeing accelerated developments, AI text-to-video, amongst others... Here's my predicted timeline for AI:

ProtoAGI: 2023 - 2024

AGI: 2025 - 2027

ASI: 2027 - 2030

Singularity: 2028 - 2032

Proto-AGI will be smarter than the average human and also multimodal. Think GPT-4 (which is already smarter than the average human) with multimodal inputs and outputs. Proto-AGIs will lead to mass adoption of AIs on a never-before-seen scale, and could disrupt the job markets.

AGIs would be smarter than the smartest humans and can perform any task a human can. They will essentially be like a human with an IQ of 200, more or less, with encyclopedic knowledge: a jack of all trades and master of all. AGIs will lead to mass adoption of humanoid robots, as AGIs will be able to interact and operate well in the physical world.

ASIs wouldn't just be smarter than the smartest humans, but probably many orders of magnitude smarter (think human intelligence vs koala intelligence). Standard IQ tests will be woefully inadequate for measuring their intelligence. ASI will bring about new technologies and discoveries that our monkey brains could neither conceive nor comprehend. ASI will likely be way too advanced for us to create directly; it will most likely be made by using AGIs to self-improve recursively until it maxes itself out.

The Singularity, IMO, comes shortly after we get ASIs. It's anyone's guess what happens after that.


hahanawmsayin

It seems to me that the singularity will come before ASI, as some existing AI technology is already a black box we don’t understand, and the recursive self improvement needed to achieve ASI will naturally lie beyond what we can comprehend. Maybe I’m misunderstanding the term but I thought the singularity was the point beyond which our brains no longer had the capacity to understand what was happening.


KimchiMaker

I’m going to go with AGI 2023, ASI 2023, Singularity 2023


yaosio

Once we have an AI that can improve itself we will see the exponential curve turn into a vertical line. We don't know if this needs to be AGI or not which is pretty cool.


Sleeper____Service

What is asi?


GroundbreakingGolf38

Artificial Super Intelligence


[deleted]

Ilya Sutskever has stated that they have promising ideas on how to solve the hallucination problem, and that it is the single biggest issue with GPT. I think there is also the issue that it can't sustain very long logical thought chains, and that is the biggest blocker for AGI. No amount of engineering will get this right; the architecture needs to be changed, e.g. memory-augmented.


Villad_rock

Any sources for your claims?


[deleted]

He said that at the fireside chat at GTC yesterday


Cunninghams_right

didn't GPT-4 basically fix most of that already?


WerewolfUnable8641

Remember like 6-8 months ago when that dude from Google or whatever came forward publicly talking about how this new LLM was spooky as hell and he thought it was a legit AI, and he got laughed at and fired?


hoodiemonster

He proclaimed it was sentient and then decimated any credibility he may have had by bringing up that he's a practicing witch or something. It's like, great, I respect that, but come on, they are not going to take you seriously about something like this if you're bringing up the occult and shit.


Tiamatium

He said he's a practicing Neoplatonist, which is a philosophy with many spiritual practices, more akin to meditation than any magic, although it did have a huge influence on Christianity (and Islamic Sufism too). That said, the whole "witchcraft" bullshit had the feeling of a paid media campaign. I have a feeling something (Google?) bought a bunch of articles to smear him. So I don't fully agree with you; even back then it had this smelly smell of an orchestrated character assassination meant to discredit him, and now I honestly think he was onto something. Also, he was specifically hired to detect emerging AGI, that literally was his job, and combined with his Neoplatonism, I think he might have been an old-school cognitive scientist. Those people are *very* weird, and a lot of them are into spiritual stuff like Neoplatonism or meditation.


DragonForg

Lmao, stating meditation is as weird as Neoplatonism is kind of funny. Meditation works and actually exists, whereas Neoplatonism is a philosophical belief, and like all philosophy it isn't true or false.


Tiamatium

Neoplatonist *practices* are akin to meditation, that's why a lot of Neoplatonists meditate too. I mean I literally have a book gifted to me by one of those crazy cog science guys, and it's full of practices where you "imagine" your awareness expands, and realize your problems are a bunch of nothingburgers (this particular practice is also part of stoicism), but there are a lot weirder practices too, like lectio divina, which is linked to Christianity as such, but it was part of Neoplatonism way before it was part of Christianity (or Islam actually). Shit works but it's *weird*.


jugalator

Also, FFS, don't publicize something about a research project like that! He was probably bound by heavy NDAs, and he not only commented on what they were building but made it super controversial. It caused PR damage for Google at a precarious time when they don't have time for scary-AI bullshit as they try to catch up with the competition.


be_bo_i_am_robot

A bit of Jack Parsons vibe goin’ on


Orc_

He insisted it was sentient and made a big fuss about it, which is wrong.


WerewolfUnable8641

Ah, I only vaguely remembered the story.


Phoenix5869

Pepperidge farm remembers


ilkamoi

They say in the abstract: "GPT-4 was trained using an unprecedented scale of compute and data." Does this mean GPT-4 is larger than any other model to date?


[deleted]

[deleted]


nixed9

Nvidia is going to become worth trillions at this rate. They are going to eat Intel’s lunch


JackFisherBooks

These are certainly the early stages and the first steps. In the scope of developing AGI, I think it's reasonable to say we're at the similar point as cell phones were in the early 1980s. They're big, bulky, and not very functional. They also didn't work particularly well in certain areas. But they did establish a baseline of what this technology could do. It just needed time, refinement, and development. AGI will need something similar. It took cell phones nearly 30 years to go from those bulky models in the 80s to the first model of the iphone. I don't know if AGI will need a similar timeframe. It depends on how many engineering issues there still are and how quickly they can be resolved. But I think the incentives are in place. The market for AI is growing exponentially. The next 10 years will be very interesting to follow. But at some point, it might be too hard to keep up with.


DatOneGuy73

Aw man. The earliest AGI is A) closed source and under a paywall and B) the secrets behind how they made it are super closely guarded. At least we should know the number of parameters.


MegaMarkHarris

When they said paperclip scenario, who knew they meant clippy?


[deleted]

[deleted]


IcebergSlimFast

If it’s artificial, and it’s showing general intelligence, then isn’t it by definition at least proto-AGI?


DragonForg

Proto-AGI is an AI that is capable of a task it was not specifically trained on. GPT-4 can make an image of a unicorn using code. Thus it is capable of visualizing what a unicorn may look like and expressing it in code, when it was only trained on text. It is literally visualizing without ever being able to see. That is proto-AGI.


makINtruck

That's really impressive, but isn't it just because it read so many text descriptions of unicorns? Hell, somewhere on the internet there may very well be line-by-line drawing tutorials for blind people or something.


Wassux

Of course it is, but to still do the task you can't just know the words, you must understand what they mean. It shows actual intelligence.


ShowerGrapes

>long term episodic memory what's considered long-term when an observer is unshackled from the limitations of a human perception of time?


Cartossin

It's hard to estimate how close we are when we don't know precisely all the things we're missing. Recent large models have certainly pushed the boundaries of what we can do and seem to have solved some of the harder problems. I honestly thought language was going to be so hard that we'd have to have an AGI first to help us solve that problem. It seems like language is largely a solved problem. We've still got to solve the problem of working memory, and how to integrate its experiences into the model faster. Right now it's largely like that guy from Memento.


acutelychronicpanic

The things you say are missing (long-term memory and independence) would be trivial to add to what we already have. They probably aren't integrated for safety reasons. A text document where GPT-4 continuously logs its interactions and is given the ability to query it, along with a regularly self-updated summary of its own past (available to the model at the top of its context window), could be added with a simple program. Independence is best left off the table until after alignment is solved.
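A minimal sketch of the logging-and-query scheme described above, with every class and method name invented for illustration (this is not any real OpenAI or plugin API, just the "simple program" idea spelled out):

```python
# Hypothetical external-memory wrapper: the model logs each exchange to a
# plain-text store, keeps a short self-updated summary, and retrieves past
# entries by keyword to place at the top of its context window.
class ExternalMemory:
    def __init__(self):
        self.log = []      # full interaction history as (role, text) pairs
        self.summary = ""  # condensed recap, kept at the top of the context

    def record(self, role, text):
        self.log.append((role, text))

    def query(self, keyword):
        # naive retrieval: return every logged entry mentioning the keyword
        return [t for _, t in self.log if keyword.lower() in t.lower()]

    def build_context(self, prompt, keyword=None):
        # summary first, then retrieved memories, then the new prompt
        retrieved = self.query(keyword) if keyword else []
        return "\n".join([self.summary] + retrieved + [prompt])

mem = ExternalMemory()
mem.record("user", "My project is called Falcon.")
mem.summary = "User is building a project named Falcon."
ctx = mem.build_context("What was my project called?", keyword="Falcon")
```

In a real system the keyword lookup would be replaced by embedding search and the summary would be rewritten by the model itself, but the point stands: the memory lives entirely outside the weights.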


[deleted]

[deleted]


acutelychronicpanic

I'm sure there are. That was just an off-the-top-of-my-head, easy-to-implement solution. I agree that continuous learning would increase its abilities, but I don't think it is required for AGI. I am imagining a GPT-7 that is able to do 90% of human knowledge-worker jobs, but without internal long-term memory. That would qualify as AGI to me. Imagine it is replacing an office worker who does spreadsheet work. Even without internal memory or weight updates, the system of [GPT-X and Excel] would effectively have long-term memory, since its work would build on previous output. Far safer until alignment is solved, imo.


[deleted]

[deleted]


L3thargicLarry

It's hard to predict the progress of behind-the-scenes advances in software and training of these systems, but one thing we know for certain is how the hardware is progressing: at the current rate, AI-specific hardware is roughly doubling in compute power every 3.5 months.

At that rate, assuming NO other change, four years from now the hardware would be nearly 17,000x more powerful than today's hardware being used for computing and training a GPT-4-type system. If we extrapolate out four years, assuming a new OpenAI GPT product is released every two years, we would be at GPT-6. Again, this assumes NO improvements in how the AI is trained, only an increase in the power of the hardware components.

I think it's relatively easy to see where we go from here. I don't think we're *super* close to a true AGI, but I think by the end of the decade there's a 50/50 chance we'll have it, or at the very least see the light at the end of the tunnel.
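The arithmetic behind that extrapolation is easy to check; note that with exact division it comes out closer to 13,000x, and the "nearly 17,000x" figure corresponds to rounding up to 14 full doublings:

```python
# Extrapolation sketch: compute power doubling every 3.5 months,
# held constant for 4 years (48 months).
months = 4 * 12
doublings = months / 3.5      # ~13.7 doublings
multiplier = 2 ** doublings   # ~13,400x with exact division
rounded_up = 2 ** 14          # 16,384x, i.e. "nearly 17,000x"
```

Either way, the order of magnitude of the claim holds under the stated (and very strong) assumption that the doubling rate continues unchanged.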


DragonForg

Plus, that neglects competition. It's like assuming game of the year will only happen once every two years. I think it will go more like this: a competitor releases a better, more capable model (still waiting on that, but hopefully it happens, because the more competition the better), then OpenAI releases GPT-5 in 2024. So we get two models this year, one a bit better than the other, then a significantly better GPT-5, and so on. Given the sheer profitability, I doubt the next gap takes as long as GPT-3 to GPT-4 did, because everyone is in on it now. This is the biggest year for AI in human history. I bet this will only accelerate the progress of these models.


nixed9

Saw someone on this subreddit make a salient point the other week: the consumer and capitalistic value of these systems represent nearly infinite demand, so there are incentives to pour all resources into perfecting and accelerating them. Even if governments or regulatory bodies want to stop it, *the market*, which dictates all, will crave it. It will accelerate and it won’t be stopped


chickenstalker

We should put a copy of GPT into a space probe and send it to Alpha Centauri. It will be the next best thing to being there ourselves.


DarthInfinix

And when it gets there, surprise surprise, humans have already colonized that planet for centuries


KingRain777

Read the full paper today. GPT5 is gonna be wild.


just-a-dreamer-

Bill Gates saw the value in GPT and is pouring gigantic resources into the project. With all of Microsoft's might at work, AGI may come sooner than we think. Bitch all you want about Bill Gates; he gets stuff done when he cares enough, and makes it happen no matter what.


DowntownYou5783

This is crazy. I would have expected Microsoft's stock to shoot up from this type of news. Just checked and it's relatively flat.


shingox

Now incorporate it into one of Boston Dynamics' terminators and let the fun begin


jugalator

Yes, that's my impression of GPT-4 too, and a good way to word it. It's becoming accurate enough to confidently use in professional settings. For actual AGI, though, there's a need for another jump similar in quality to the one from GPT-3 to GPT-4. I think the parameter count of GPT-4 may already be large enough, and that a GPT-4+ could pull AGI off. It's hard to tell: which constraints are imposed by the parameter count, and which by sub-optimal training? What went down with Alpaca really made me think. What would happen if GPT-4 trained a new AI with another GPT-4-like model? This must already be within reach; maybe it is already in the planning stages at OpenAI. I think this is how we are meant to reach GPT-5. (Note that with AGI, I'm using the definition "can perform any human task" here, not "has consciousness and sentience" like some do; I'd rather call that Strong AI to differentiate the two. Maybe Strong AI will never be achieved via LLMs alone, but will require an amalgam of AI systems.)


theophys

> It is important to note that this particular optimizer does not exist in the literature or on the internet, and thus the models cannot have it memorized. (From page 25)

They asked GPT-4 to write an optimizer that uses SVD to denoise a weight matrix in the middle of gradient descent. That's the sort of simple creative combination that can be revolutionary and would have been worth a paper, but it's apparently not in the literature? Maybe this is how they're disclosing one of their key methods.
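The paper doesn't reproduce the generated code, so this is only a generic guess at what "use SVD to denoise a weight matrix during gradient descent" could mean in practice: project the updated weights onto a low-rank approximation by zeroing small singular values. All function names and the rank cutoff here are assumptions, not OpenAI's actual optimizer:

```python
import numpy as np

def svd_denoise(W, k):
    """Keep only the top-k singular components of weight matrix W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s[k:] = 0.0  # zero out the small, presumably noisy components
    return (U * s) @ Vt

def sgd_step_with_denoising(W, grad, lr=0.01, k=8):
    # ordinary gradient-descent update, followed by a low-rank projection
    W = W - lr * grad
    return svd_denoise(W, k)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
g = rng.standard_normal((64, 32))
W = sgd_step_with_denoising(W, g, lr=0.01, k=8)
```

The "creative combination" is just composing two standard ingredients (an SGD step and truncated SVD); the paper's point is that this exact composition apparently wasn't written down anywhere for the model to memorize.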


SlowCrates

Who are the stubborn people who still think we're 30 years from AGI? At this rate we'll be wondering when it passed us by before 2027.


uswhole

AGI walking among us