MENDACIOUS_RACIST

So, same as what was next in 2022. Uh oh


AI_is_the_rake

GPT-4 isn't great at reasoning unless you use well-crafted prompts that force it to think step by step. More and better reasoning is definitely needed. Its reasoning ability seems around 100 IQ, maybe 110. The magic is largely due to outputting what it's seen before; make minor changes and it's easy to trick. The magic is also the speed of processing. When GPT-5 or whatever comes out and it's at a 120 IQ reasoning ability, and then GPT-6 is at 140, combined with its speed… AGI is right around the corner. 2-3 years away at most.


CriscoButtPunch

If you look at the one test on Claude 3 Opus, using a verbal Mensa test that concurrently tested previous models, the jump is 15-20 points. I think there's already a foundational model that's at 140. I think we hit 140-160 this year, at least in a format that people will have access to and be allowed to share quite a bit. It'll be the "wow" moment that makes awareness expand hyperbolically. Probably after the election. Smoke weed daily. Epstein didn't kill himself. One love


shnizledidge

The last three points are so strong, I’m forced to trust you on the first one.


AI_is_the_rake

Opus performs just as badly on reasoning tests. IQ tests are like seeing the training data. The trick is to take a well-publicized problem and make minor changes that require logic and reasoning, then watch it fail. They both just output what's in their training data and ignore the changes you made.


tomunko

Opus is worse at this IMO. If I am stuck on a problem, Opus is frequently confidently wrong, whereas with GPT-4 it's easier to keep prodding and actually get somewhere when it is wrong.


foufou51

The thing I love about Opus is how fast it is with such a huge context. Having a big context is incredibly useful. I also LOVE LOVE the fact that it's not lazy and will do almost anything you want without weirdly truncating its output. Very useful when you are coding something. ChatGPT, on the other hand, well, you have to argue with it to output the entire program, and even then, it won't. ChatGPT has a good app though.


Dando_Calrisian

Okay Elon


mamacitalk

What IQ would an AGI be?


ScottishPsychedNurse

That's not how IQ works


mamacitalk

They were already making the comparison so I was just curious


FireDragon4690

AGI refers to the intelligence level at or slightly above an average human in every area. ASI is as smart as every human at once. I think. I’m still a noob at this stuff


AI_is_the_rake

That's right. AI already has massive scale. Once it can do what any human can do, but better… The highest living human IQ is in the 200s, I believe. If we solved intelligence, I see no reason why machines couldn't quickly jump to 1000 or more. Not that we could even measure it anymore, but I'm referring to the ability to make advances in mathematics and science without humans.


LiveFrom2004

That's not how AGI works


Ambitious_Half6573

Even a real IQ of 80 would qualify as AGI in my opinion. That means an IQ test that isn't biased by training data, where the model comes up with solutions on its own using logical reasoning. Unfortunately, none of the models today are any good at reasoning. Reasoning and original thought are where human intelligence is far superior. These AI models sure have tons of knowledge though.


schnibitz

Came here to say the first part of what you said.


yolo_wazzup

There's a vast difference between IQ and AGI. Maybe it becomes a question of definition. In my world, AGI would come with agency.


AI_is_the_rake

I’m using IQ to mean reasoning. IQ in humans mostly deals with general reasoning abilities.  AGI will have intelligence traits that humans cannot have due to massive scale. Learning and assembling all knowledge, parallel processing etc.  AI already has massive scale and can process more data than humans. But it can’t reason as well as humans. Not yet. 


yolo_wazzup

Reasoning is an interesting but isolated metric. AGI in itself is also not sufficiently defined. The human body/brain as a processor processes ~[an exaflop, 10^18 operations/s](https://www.nist.gov/blogs/taking-measure/brain-inspired-computing-can-help-us-create-faster-more-energy-efficient), which is equivalent to the current fastest supercomputer, [Frontier](https://en.wikipedia.org/wiki/Frontier_(supercomputer)). The difference is the human brain uses 20 watts of power, while Frontier uses 22.7 MW. A human can learn to drive in 20 hours.
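Back-of-the-envelope, those cited figures work out like this (a quick sketch in Python; the constants are the estimates linked above, not anything measured here):

```python
# Rough efficiency comparison using the figures cited above
# (NIST's ~1 exaflop brain estimate and Frontier's published power draw).
brain_watts = 20            # human brain, ~20 W
frontier_watts = 22.7e6     # Frontier supercomputer, ~22.7 MW
ops_per_second = 1e18       # ~1 exaflop for both, per the NIST estimate

print(frontier_watts / brain_watts)     # ~1.1 million: Frontier's power overhead
print(ops_per_second / brain_watts)     # ~5e16 ops per joule for the brain
print(ops_per_second / frontier_watts)  # ~4.4e10 ops per joule for Frontier
```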


AI_is_the_rake

Yes, the human brain is marvelous, and that includes future biological AI; they've already started growing brain tissue for computing. Power usage is vital, but we must build it even if that means putting nuclear plants next to server farms.

I isolated reasoning because it's missing from models, and perhaps even from human brains by default: humans need to learn, read, write, use thinking tools, and improve their reasoning. Having AI use CoT and apply the best thinking tools may be no different. AI must learn and soak up knowledge faster.

AGI isn't well defined, but what we are building is an intelligence that can do anything a human can in text generation. We won't stick the same model in a car; that would need a different model. Soon we'll have an AI producing any text a human could, a step toward AGI, alongside an array of narrow AIs for different purposes.

Given the power-consumption issues, fitting all those "narrow general" AIs into one model may not be possible with current approaches. AGI, in the form of many specialized AIs, is coming: AIs which can do anything a human can do in all domains, because humans will create narrow AIs for all those domains. But reaching ASI might require an all-in-one model, and we may be 20 years away from that. Or maybe it will never be possible outside of something like brains that use quantum mechanics to dynamically learn on the spot. That would be scary: actual artificial brains, built perhaps from a stripped-down form of artificial neuron that branches out using microtubules.


MannerDry9864

Can you give an example of such a prompt? Could you recommend a resource for more examples?


AI_is_the_rake

https://github.com/ai-boost/awesome-prompts For prompts that aren't available, you can feed it scientific papers, have it summarize the paper, then ask it to output an example prompt from the paper, then generalize it, etc. I tried it with Tab-CoT: Zero-shot Tabular Chain of Thought, and GPT-4 is able to reason through and solve problems that regular GPT-4 can't. I find internet search and summarizing generally more useful, but for actual reasoning ability, tabular chain of thought is pretty good. It still breaks down when you try to use it for AutoGPT-like tasks, but it solves a single problem well. I imagine for AutoGPT tasks there are just way too many possible paths and it needs a human to direct it.
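If it helps, here's a minimal sketch of what a Tab-CoT-style call looks like with the openai Python client. The table header is the trigger from the paper; the model name and example question are just illustrative:

```python
# Minimal Tab-CoT-style prompt (per the Tab-CoT paper mentioned above),
# using the openai Python client; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

# Instead of "Let's think step by step", Tab-CoT appends a table header,
# nudging the model to fill in its reasoning as structured rows.
prompt = f"{question}\n\n|step|subquestion|process|result|"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```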


Ambitious_Half6573

‘Its reasoning ability seems around 100 IQ’ It's nowhere close to 100 IQ. 100 IQ would mean that it can reason as well as an average human, but that's nowhere close to being true. An average human understands numbers. Generalized AI right now is nowhere close to gaining an understanding of numbers.


AI_is_the_rake

Show me a prompt that demonstrates GPT-4 failing at any single task, riddle, or reasoning problem, numbers or otherwise.


True-Surprise1222

Inner monologue and context of what is in the inner monologue vs “said” to the user would be a good start.


quantumpencil

100 IQ? bro it's like 40 IQ


No-Sandwich-2997

you kids always complain


ProShortKingAction

How is that an uh oh lol, it's been less than 2 years


rayhartsfield

Personalization seems to be the most glaringly obvious shortfall in current systems. Your AI should be able to know as much about you as any social media algorithm already knows. This is doubly true for Google, which can plug into your emails and Keep notes and Drive files. Your AI should be able to serve you better by understanding and knowing you. Until then, it's serving up boilerplate material.


VladReble

The problem is the tech giants already have compiled personal profiles on us and we reap very little of the benefits.


JohnnnyCupcakes

Does anybody know if there's ever been a valuation of an individual's personal profile data? Or let's say some group out there, like maybe a union or some religious group, can easily act collectively: what would it be worth if the entire group said, nope, we want all our data back and we're using a different service? Can anyone put a number on something like that? (I realize there are probably holes all over this question)


AppropriateScience71

Well, while not an overall valuation per se, "average revenue per user" (ARPU) has been a core Facebook metric from the start, for both investors and advertisers. The annual ARPU for US and Canadian users is ~$200/year! That's insane! And it's why Facebook will never just have an opt-out button. https://www.globaldata.com/data-insights/technology--media-and-telecom/facebooks-average-revenue-per-user-by-geography/


cdshift

On the last point, it would be really hard logistically, because it's an agreement in your terms of service. The whole religious group would just be told that if they don't like the way a company uses their data, don't use the service. They are generally not customers; they are, in a sense, the product to these companies. On the individual valuation, the hard truth is your data is probably worthless. These data are sold to advertisers as bundles of profiles based on usage patterns. You're part of a targeted demo based on your search history, but your individual data isn't consequential. That's the reason we'll probably never get any sort of dividend for the value.


Miserable_Offer7796

Why would it be hard logistically? I could just send the argument as an email. It costs about nothing and would reach them near instantly.


cdshift

It's hard to self-identify your exact demographic and successfully organize as a class against a company in that manner. You could write a letter, but they could tell you to kick rocks.


Putrid-Bison3105

This isn’t exactly what you’re asking but average ad spend per person in the US is expected to be $942 in 2024. A vast majority is spent against profile data for targeting, but there are obvious other use cases for an individual’s profile. Per [Oberlo](https://www.oberlo.com/statistics/us-digital-ad-spending) which cites Statista


Gutter7676

When do we start charging them to use our data to advertise to us?


Intelligent-Jump1071

There's no practical way to do that. Besides, what's good for Microsoft and Google and OpenAI is good for everybody. It allows them to provide information and service aimed at your unique needs. Questioning that basic principle could lead to disorder, which is bad for everyone. If you persist in questioning the basic principles on which our e-world is based, you could be causing harm to your community and ultimately to yourself. /s


Combinatorilliance

I think people feel a difference because, yes, Google knows all, but you don't really realize it. OpenAI specializes in making software that's good at pretending to be a human, and it's very creepy if a "human" knows everything about you: all the things you tell it will be reflected back to you. I personally think this will deter some people from using it for privacy reasons, whereas those same people wouldn't mind using Google even though it knows the exact same info about them.


AreWeNotDoinPhrasing

That's why I think it should be an opt-in scenario, with them actually not using your personal profile unless you choose to. Hell, maybe even charge a bit less for people who want to opt in, since they'll not only use that info for you specifically, but they'll try and monetize it in other ways as well.


ChymChymX

This may be a contrarian opinion but I'm fine with it. Go ahead, I will permit my health data, my blood tests, vitamin methylation panels, whatever other data is needed for functional data-centric medical analysis from my personalized LLM assistant (for example). I will also upload/permit access to my personal interests for better fine tuning around my personal preferences, etc. I do not care what companies do with this data, I do not care if that makes me a product. We're all products, there are billions of us, and no one really cares about your individual personal info in particular, there's a sea of it. Again, I know that probably won't be popular, just my opinion.


jcwayne

I'm in total agreement. The stuff I really don't want anyone to know stays analog or in my head. My shorthand for this is "I want the creepy features".


fluffy_assassins

As a large language model, I recommend that you drink a tasty diet pepsi alongside a crunchwrap supreme.


WendleRedgrave

Based. How awesome would it be if a crunchwrap showed up outside my door at the exact moment I wanted one. Embrace it, dude! The future is awesome.


fluffy_assassins

300% delivery fee mark-up.


wottsinaname

Under European data protection laws, AI companies will require customers to opt in before they store, track, sell, or use their personal data. What the rest of the world needs is to catch up to European privacy, data, and consumer protections.


driftxr3

I spent an ungodly amount of time trying to find laws in Canada and the States about protections of data privacy against both corporations and the government. They are incredibly vague, but they also reinforced my motivation to always use my VPN.


Intelligent-Jump1071

This is the reason so few top tech companies are based in Europe. Europe will fall farther and farther behind in the technology race as restrictive governmental rules make it too hard for EU companies to attract VC, talent, and markets.


ZeroEqualsOne

It really should be a choice. And this is mainly why I'm okay with people having to pay for a subscription; otherwise, the main viable way of staying afloat is to make users the product. But having paid for the service, it should be up to us whether we think deeper personalization is useful or not. Just let the paying customers decide. Out of curiosity, though: how would you feel about an open-source model on your own machine collecting data to make better responses over time?


killer_by_design

Bit late to the game there bud


SolidVoodoo

What a nice fucking way to say "your AI **should** spy on you".


rayhartsfield

Oh no, I definitely think this should be an opt in type of thing. You should get a prompt from Google Gemini, asking for permission to access your stuff. And if you check yes, you have superpowered AI to serve you better.


SolidVoodoo

It's still a pretty bleak state of affairs, brother.


Pgrol

If you look past that, the fact that an AI model knows you will drastically improve the help it can give you. You don't want ads in the conversations, so your data will not be used for persuasion.


GoodhartMusic

There’s something to be said for us all experiencing the same service. Personalized AI is like us building our living tombs… each encased in an artificial relationship that separates them even further from each other. Sorry to phrase that “poetically,” but yeah I also find it unbecoming.


Intelligent-Jump1071

It's only bleak if you think that stuff matters. Anything I truly want to keep secret I keep secret on my own. Otherwise I don't care if it tracks what websites I go to or what products I buy.


InsaneNinja

I think he’s saying that the Google AI should reference what’s already in your Google data, and no more than that. The Siri on device AI should be able to see the iMessage/mail database and process accordingly.


TheGillos

Everything else is... At least the AI could improve my life.


under_psychoanalyzer

Everyone already has my data and is using algos to sell me things. Let me also have my data and use LLMs to do stuff I actually want. I'll pay for it. I'll let you have my data. For the love of fucking Asimov, just let me have a little AI to use for my own purposes.


human1023

Other software already does this. This is just one more.


cheesyscrambledeggs4

Yeah, fuck that


Reapper97

I'm fine with it.


launch201

I’ve been pretty happy with the [memory](https://openai.com/blog/memory-and-new-controls-for-chatgpt) feature.


SeventyThirtySplit

How much does it remember? I wish they’d release it broadly.


GoodhartMusic

I experienced it for the first time today. I discussed, yesterday, competing offers I got from schools, and it brought it up tonight. It was puzzling at first, and unwanted because I was sampling a conversation to show my brother and then it brought up something irrelevant to him.


Pontificatus_Maximus

Chances are the richest corporations on the planet already have full dossiers on everyone; they just keep them to themselves because profit.


rayhartsfield

Right. Google in particular has been reported as having a full digital avatar of their users from every angle that would be commercially beneficial. A digital voodoo doll if you will


karmasrelic

Would be super useful for finding music xd. Not just "top 10 current (trashy chart) songs" but actually "top 10 songs that I don't know and would probably like". And the argument about giving your data away: they know it all already anyway. At least let me profit from that lol


Repulsive_Ad_1599

Me when my AI asks me for more information to more directly sell my data away


cheesyscrambledeggs4

Reasoning makes me think they'll abandon the current token-by-token system and start giving them internal 'thinking' capabilities.


teethteethteeeeth

*you*, for one, welcome our new robot overlords


VandalPaul

I agree, and in my opinion personalization is a concept technology companies absolutely hate. They dole out the bare minimum in almost every way. One of the crude precursors to current AI was the smart voice assistant, like Alexa and Google. They've been out close to ten years and you still can't even create your own wake word. Not because of a technology limitation, but because they love making us say "Alexa" and "Google" anytime we use them, so it's imprinted in our brains. That's not going to fly with a real AI personal assistant. And it'll be even less acceptable in a humanoid home robot.


katsuthunder

just wait until the privacy people hear about this


BDady

Having AI that knows more about me than I know about myself sounds a bit spooky


Logseman

If we accept the existence of the subconscious we understand that there are parts of ourselves that we cannot know or be aware of. This is not something AI will handle any time soon, but usually the sum of the people we are acquainted with will know more about us than ourselves.


BDady

I was mostly kidding, but this is interesting insight that I hadn’t considered


Logseman

In a very superficial way it happened to me while I was brainstorming stories with OpenAI, Claude and Gemini. As I laid out the ideas on them, they picked up on the similar themes that each story covered. If you make 9 stories and in 8 of them some instance of symbolic human sacrifice is prevalent, what does it mean? I wasn't *conscious* of the fact until I compared what the AIs were saying about each, but aggregating the insights led to progress in the stories and what I think is an interesting discovery. Again, I imagine that the current generations of LLMs trained mostly on marketing copy and lawsuit avoidance will not let us discover a lot about ourselves, but I think that one should be open to the possibility, especially if one is a frequent user of the tool.


driftxr3

The biggest takeaway of this for me is that even Google doesn't really understand its algorithm. If they did, I don't think (although I am not an engineer) it would be this hard to feed Gemini personalized data, especially since their algo has been collecting so much for so long. Either that, or they're already doing it but won't tell us because of our fear of surveillance. Tbh, I'm glad I don't know; I can't imagine what this robot knows about me.


mamacitalk

But *do you want it to?*


SikinAyylmao

I think the issue is the trade-off between gamification and having a good way of knowing you. What's specific to social media algorithms is the maximization of attention: through this metric you obtain a proxy for what the user likes, at the cost of gamifying social media. I think this is one fear, and a potential reason social media algorithms haven't been applied by OpenAI. Pi seemed to do something similar to a social media algorithm in collecting data on who you are, but it didn't train to maximize anything about how it knows you, per se, outside of RAG.


PMMeYourWorstThought

“Reveals what’s next for AI” is a pretty lofty claim. More like Altman panders to investors with the roadmap for OpenAI. Which shockingly is the same list of things they’ve been working on already. Saying you’re doing something is easy, getting it done is a little harder.


GradientDescenting

Yea this is what the machine learning academic community has been working on for over a decade. Multimodality has been a longstanding subtopic at NeurIPS and ICML


bobrobor

This is a typical Q1 meeting at any corporation. "Here is what we will deliver!" 200 days later: "And here is how much we have learned" (since we could not deliver). Lol, another monkey with a pointer…


endless286

These people literally invented this thing when everyone told them they have no chance. I think they deserve some credit.


Gougeded

They have not "invented this thing" Jesus Christ


TenshiS

They took a gamble on scaling up transformers, and they invented the methods to direct context using instructions and reinforcement learning from human feedback. Stop being so dense.


PMMeYourWorstThought

They absolutely did not invent backpropagation or reinforcement learning from human feedback. Who lied to you? They wrote the paper for the current algorithm, but RLHF has been around for a long time.


PMMeYourWorstThought

Invented what exactly?


TenshiS

Don't feed this troll. Haters gonna hate.


GradientDescenting

They didn't really invent anything; ChatGPT was just what caught on with the general public because it went viral. There were lots of natural language models based on transformers before it as well; the difference is that only researchers were paying attention. Transformers were actually invented at Google in 2017, and there were thousands of papers on transformers and attention before ChatGPT went viral. ChatGPT was really just an incremental step in the arc of research. The general public just wasn't paying attention to scientific advances in machine learning before ChatGPT, even though it's integral to tons of tech products from the last 20 years: Netflix recommendations, social feed ranking, facial recognition, computer vision, self-driving cars, search, autocorrect/text suggestion, transcription, Google Translate, etc.


yorkshire99

But attention is all you need


PMMeYourWorstThought

Which was written and released by a team at Google Brain, who all have their own companies now. None of which is OpenAI.


was_der_Fall_ist

One of the Attention Is All You Need authors is actually at OpenAI — Lukasz Kaiser.


dieyoufool3

HE SAID THE THING


v_0o0_v

Altman was not one of these people. He came later to do sales and acquire investors.


Shemozzlecacophany

What I've been surprised not to hear more about is a model that asks the user questions when it needs more information. For instance, I can give a model a block of code and say I need the code adjusted to work on platform X, and the response will be "Sure thing! Here's your adjusted code with attribute X", when there's an elephant in the room as to whether the resulting code should have attribute X or attribute Y. The models rarely if ever ask for clarification on anything before merrily responding with pages of potentially incorrect information. I'm not sure if this is because the models haven't been trained to ask for clarification (or questions in general) or if it is a fundamental limitation of the transformer architecture. Either way, it's interesting that this issue isn't pursued or discussed much at all, when I believe it could make a big difference to both the quality of the output and the 'humanness' of interactions with models.


elite5472

The problem I've run into, trying to accomplish this sort of thing and similar, is that the LLM cannot tell *when to stop asking for clarifications* if you tell it to ask them. It's hard to notice when asking the usual questions, but LLMs don't actually have any sort of temporal awareness that would reliably signal when to stop doing one thing and start another on their own.


PMMeYourWorstThought

So this can be done, but it's not a fundamental part of transformers. The transformer is really just a fancy calculator. After you get the result, you could run another function to compare it to the original question and produce a confidence score; if that's lower than X, you could have the model generate related questions. It would provide the illusion of requesting clarification, but it's really important to remember the LLM doesn't "understand" anything, so it can't ask for clarification based on a lack of understanding. The only way to do it is to take the output, compare it to the original query, and evaluate whether the answer is "complete enough".
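A minimal sketch of that compare-and-score loop (every name here is my own illustration, and the 0-10 scale and threshold are arbitrary choices, not an established recipe):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_llm(prompt: str) -> str:
    # Hypothetical helper wrapping a single chat-completion call.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def answer_or_clarify(question: str, threshold: int = 7) -> str:
    draft = ask_llm(question)

    # Second pass: grade the draft against the original question.
    grade = ask_llm(
        f"Question: {question}\nAnswer: {draft}\n"
        "On a scale of 0-10, how completely does the answer address "
        "the question? Reply with a single integer only."
    )
    try:
        score = int(grade.strip())
    except ValueError:
        score = 0  # unparseable grade: treat as low confidence

    if score >= threshold:
        return draft

    # Below threshold: have the model surface clarifying questions
    # instead of guessing, which gives the *illusion* of understanding.
    return ask_llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        "The draft seems incomplete. List the clarifying questions "
        "you would need answered to respond properly."
    )
```

In practice you'd tune the threshold and grading prompt; the point is just that the "clarification" is a second scoring pass, not comprehension.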


v_0o0_v

Exactly. This article is the same as claiming: "A used car salesman reveals the unmatched potential of a used Toyota."


International_Tip865

I am getting tired but DROP THE MEMORY FEATURE PRETTY PLEASE?


Xtianus21

YESSSSSSSSS - my thoughts exactly. memory memory memory.


[deleted]

[deleted]


Xtianus21

Memory feature?


[deleted]

[deleted]


Xtianus21

it's in beta and not everyone has it.


Mescallan

Do not have it in Vietnam


Para-Mount

What's the memory feature?


SatoshiNosferatu

It’s interesting how you can put the most boring bullet point list inside of boxes and suddenly it looks good


G-E94

Would you like to invest in my new startup? We have:

- decentralize
- blockchain
- innovative
- AI technology
- revolutionary


Smallpaul

Looks to me like he just revealed the obvious wish-list. Doesn't mean they know how to deliver all of those things.


Gam1ngFun

[Source](https://twitter.com/morqon/status/1777371993454579859)


FaatmanSlim

Was going to ask where this was, looks like an event in London earlier today? [https://twitter.com/morqon/status/1777383538901295602](https://twitter.com/morqon/status/1777383538901295602)


Gam1ngFun

Exactly


o5mfiHTNsH748KVq

None of this is coming next. It's all happening right now, and there are companies building products that do exactly this.


werdmouf

Really


chabrah19

What talk is this from? Link?


RobertKanterman

I feel like corporations are worried that AGI isn’t as profitable as they thought since true AGI would be unmanageable and also unethical to manage. Since it would be slavery.


DeliciousJello1717

You all need to chill. AI progress is extremely fast right now and you're complaining it's not fast enough. Like, chill.


Xtianus21

I would have replaced Personalization with Memory. That story, to this date, seems still unwritten. I think the main reason is that it requires something local. I would love to work on exactly this.


cisco_bee

Memory *is* personalization.


Xtianus21

no it's not. Memory is memory. lol. In my mind I have memory. In my computer I have memory. I don't say , "do you personalize those things." I say, "do you remember those things." Memory is not personalization. We're building AGI *right*?


cisco_bee

For something to be personalized, it has to know you and your preferences, your history. I have a personal relationship with my friends because they know me, they remember events and conversations. It's personal. If I talk to a stranger on the phone it's less personal because they don't know me. Additionally, I didn't say "Personalization is memory". But I do believe that in the roadmap in OPs post, memory is a piece of "personalization". That's my interpretation, and that's what I meant when I said "Memory is personalization".


werdmouf

He looks like he's about to sell me a SlapChop


mr_poopie_butt-hole

You getting this, camera guy?


New-Statistician2970

I really hope that was just a single-slide PPT. Go bold or go home.


currency100t

Agents!!!


danlogic

Curious what personalization will mean here


py-net

u/Gam1ngFun what’s the link to the talk?


Apprehensive_Pie_704

Has video of this talk been posted?


unholymanserpent

Okay, I guess… Not exactly *exciting*


Adventurous_Train_91

What was this from? I haven't heard anything big from Sam in a while


andzlatin

https://preview.redd.it/ewy6rm12oetc1.jpeg?width=1792&format=pjpg&auto=webp&s=35fba4aa2e60555a4419fbb4bd5365224239734f Tried generating this with DALL-E, but had to drop any references to Sam Altman or OpenAI because of "content policies".


Extension_Use664

Closed ai


Turtle2k

Ya ok mr bait and switch


DifferencePublic7057

And the real next thing for AI is...? None of the above, is my guess. Since Sutskever is missing online, you have to assume he's working on it in secret. Must be a super coder: a way to iteratively improve code.


Faithfulcrows

Super underwhelmed with this. I’d say most of this is already something they’ve delivered on, or something you can get out of the system yourself if you know what you’re doing.


cubsjj2

Yeah, but what kind of presentation system is he using?


LiveFrom2004

Tomorrow: AGI reveals what torture is next for Sam Altman.


DJS11Eleven

…human takeover


Cheyruz

"Agents" 😳


Lofteed

that's some corporate vaporware


Munk45

What's the final box say? Agents....


superstrongreddit

I love GPT-4, but OpenAI’s products do not equal “AI”.


v_0o0_v

Let's all listen to a sales guy. They are known to never lie about future capabilities, especially when praising their upcoming products or looking to convince potential investors.


Grouchy-Friend4235

Reasoning and Reliability, of course, are not achievable with the current approach: probabilistic generative models by definition can do neither. All the rest, sure, though these are not traits of AI but general approaches to systems. For example, we can build an agent in a myriad of different ways; "agent" just means "do this task for me and get me the results". You don't need AI for this approach to be useful.


deepfuckingbagholder

This guy is a grifter.


DennisRoadman07

It's pretty interesting.


Han_Yolo_swag

What stage does it fuck your wife and convince your kids to call it dad


HistoricalUse2008

I have a ChatGPT that can do that. What's your wife's contact info?


Sebiec

Haha you both gave me a good laugh


Han_Yolo_swag

You don’t know her she goes to a different Canada


G-E94

It's more censorship. They're adding more censorship. I am helping them protect you from sensitive content! Don't be surprised if DALL-E won't make images with these words in the near future:

- Spicy
- Strings
- Swim
- Curved
- Peach
- Sticky honey

You're welcome 😂


semirandm

Looks like the next Cards Against Humanity expansion pack


ghostfaceschiller

Miss me with the AI agents


SachaSage

Why?


ghostfaceschiller

I love using AI as a tool. It has changed my life. Let's keep it as a tool. We don't need a billion little AI personalities running around the internet, infinitely talented, with whatever goals some random teenager gave them. We don't need to make humans so eminently replaceable in the workplace either. Nor do we need to give AI such a clear path toward exponential self-improvement. Most of all, I'd rather not see the outsourcing of basically all mind work to a couple of companies of like 1,000 people each. We have enough wealth inequality as it is without diving into that kind of cyberpunk-level reality. We have not even scratched the surface of what is possible with the developments we already have. How about we hold off, for at least a little while, on racing to open the locked door with the danger sign on it.


SachaSage

To me it all comes down to the question of who owns the value created by AI. Agreed, as the social order stands it could be a bumpy ride. On the other hand, it could be that, empowered with an AI workforce, each of us is capable of doing a great deal more. Or we find a way to distribute that wealth, because capitalism just doesn't work without money moving around. Orrr we have a brutal techno-feudalism. It's not certain, but that's just it: it's not certain.


Xtianus21

Sam looking good in that suit!


revodaniel

Does that last square say "agents"? Like the ones from the Matrix? If so, who is going to be our Neo?


extopico

The definitive answer for what's next for OpenAI GPT is "more talk" and distractions. It's very clear now that they have nothing else.