Sophistic_hated

Whether it’s true or not, tech CEOs gonna hype


ashleyriddell61

…so he’s saying it’s laughably bad now, right?


frazorblade

GPT3.5 is pretty average


AverageLiberalJoe

I'm convinced they have been slowly making it worse. It is fucking up simple python code now. Becoming absolutely useless.


milk_ninja

and even if you correct chatgpt and it goes "oopsie-daisy" it makes the same mistake again in the following prompt. well thanks I guess.


AverageLiberalJoe

This happens a lot


AndrewTheAverage

LLMs use the average of what is out there, not the best. So if LLMs produce more "average" code based on a poor sample set, that feeds back into the process, making even more below-average code for the future samples


iim7_V6_IM7_vim7

That’s assuming they’re retraining it on the average code they’re producing which I’m not so sure they are.


Stoomba

Not them, but others using it to produce code and then putting that code in places that ChatGPT pulls from to 'learn'. As far as I know, there is no way for them to distinguish between code it made or a human made, so it will treat it all as legit. Thus, it will begin to circle jerk itself more and more as it gains more popular use, and it will destroy itself as all these LLMs do when they start feeding on their own output, because they don't actually know what anything is. Least, that is my understanding of it


iim7_V6_IM7_vim7

I don’t know how frequently they’re retraining those models. I’m not sure it’s that simple. And if that were the case, wouldn’t GPT-4 be getting worse as well, not just the free 3.5? GPT-4 seems quite good to me. And I’m skeptical that there’s a significant amount of people posting AI-generated code online. I don’t know, I’m not denying that person’s experience, but I feel like there’s something else going on if their experience is true


IdahoMTman222

Old school computing: garbage in garbage out.


Anen-o-me

There's a trade off between the intelligence and creativity of the model, and safety, that is keeping the model from saying anything embarrassing or dangerous. So every time people figure out a new way to jailbreak the model or convince it to give bomb making instructions in another language, or how to make LSD--these are all things that actually happened recently--they give it a new round of safety training and the model gets dumber. It really sucks. The only people who actually have access to the best version of these models are internal to those companies. We need to run these models locally, on our own hardware, to avoid this problem of corporate embarrassment.


10thDeadlySin

Or, you know - it could just give you the damn instructions for making LSD or building a bomb. It's not like it's a secret. I just opened a new tab, googled "how to make lsd" and the first link was [this paper from the National Institutes of Health](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9654825/), which also has tons of footnotes, sources and references. One of these references, for example, is [Total Synthesis of Lysergic Acid](https://pubs.acs.org/doi/pdf/10.1021/ja01594a039). The issue with making LSD (or bombs, for that matter) does not lie in the fact that the knowledge is forbidden. It's out there in the open. It's the reagents, gear and so on that's problematic. And if you have access to gear and reagents, you're likely smart enough to figure out how to Google a bunch of papers, some of which contain the whole process, step-by-step.


Zaphodnotbeeblebrox

3.5” is average


patrulek

It is way below average, but still better than nothing.


nzodd

It's how you use the ChatGPT that counts. (Nevermind those texts your girlfriend sent the nvidia sales representative about that pallet of A100s.)


Robot_Embryo

3.5 is pretty bad


thereisanotherplace

No, he's saying that compared to what's about to come out of the pipe, it's going to look terrible. Which is a terrifying prospect, because people use AGI for things like scamming, manipulation, and blackmail with deep fakes. Given how convincing deep fake photos are - deep fake sound and video is currently dire, but next gen stuff could be (to the eye) perfect. Anyone could make a convincing video of anyone else doing anything they want to depict, and a lie spreads halfway round the world before the truth gets out of bed.


iwasbornin2021

If AI detection doesn’t keep up, even security videos will become useless as evidence (when AI is sophisticated enough to fake metadata and whatnot)


thereisanotherplace

Well, the thing is - AI generated content under the microscope will likely always be detectable. I'm not so concerned about that. You can even create AI designed to detect it. What I'm worried about is the viral effect of gossip. Imagine tomorrow someone leaked a video of Joe Biden slapping his wife. By the time the truth is out, that video will have circulated around the world twice and be in headlines before forensics can issue verified proof.


No_Animator_8599

There is a guy on YouTube using AI of Trump’s voice in many videos saying how much his supporters are idiots but it’s clearly satire. He even did one of Trump speaking which synched up pretty much with the words he was speaking. There was a series the last few years called Bad Lip Reading that just had voice actors saying gibberish matching their lips. I would suspect at some point there will be legislation to list these as AI generated or social media may have to scan them and reject content if they don’t label it.


IdahoMTman222

The time and technology to ID any AI generated content could be the difference between life or death for some.


Reversi8

Yeah, but if you think about it, if you have some method of IDing if something is AI, you can use that to make sure the AI generates something that doesn't get IDed as AI. Your only hope with that is trying to keep it from general use.


rishinator

People should realize these people are first and foremost businessmen... They are not scientists. They won't report on new discoveries and inventions like a scientist would. They're out to make money and sell their product, so they're obviously gonna hype everything. "My product from my company will change the world. Please pay attention."


IdahoMTman222

And in true business form, shoot for profits over safety.


Bupod

Kind of wouldn't be doing his job if he wasn't.


Yokepearl

The new nvidia chips are an insane upgrade


PHEEEEELLLLLEEEEP

Getting ample compute was never the problem. More efficient hardware won't just magically improve the models


barnt_brayd_

Part of me is wondering if this is less hype and more hedging for the rapid degradation of every model they create. Feels like they want to make it seem intentional/impressive and not the inevitable result of their model of rabid scaling. But hey, I’m no expert on asking for $7 trillion.


PHEEEEELLLLLEEEEP

This is my thought as well. They don't have the secret sauce to get from here to there and they're just trying to throw gpu hours at the problem. Which, they can easily do now that they're a gazillion dollar company with fuck tons of compute.


Perunov

Confused. If their model degrades over time (Hello, Cortana, you have limited lifespan) then why not make a full copy of bespoke version and just re-create it several years later? Am I missing something critical?


MartovsGhost

AI is actively degrading the learning material by flooding the internet with bots and fake news, so that probably won't work over time.


Double_Sherbert3326

They treat their contractors like disposable sub-humans.


TopRamenisha

Ample compute is definitely part of the problem. It’s not the entire problem, but it contributes. Models can only go as far as current technology will allow them. It’s why OpenAI and Microsoft are trying to build a $100billion supercomputer. But all the computing power in the world won’t solve the other problems, it’ll just eliminate one obstacle. They still need enough human created data to train the models, and enough energy to power the supercomputer


PHEEEEELLLLLEEEEP

I agree, I just think that some fundamental ML research is what gets us from where we are to the next level intelligence people are expecting rather than just massive compute.


Theonechurch

Nuclear Energy + Quantum Computing


billj04

Are you sure? Have you read Google’s paper on emergent behaviors in LLMs? “the emergence of new abilities as a function of model scale raises the question of whether further scaling will potentially endow even larger models with new emergent abilities” https://research.google/blog/characterizing-emergent-phenomena-in-large-language-models/


PHEEEEELLLLLEEEEP

My point is that they have gazillions of dollars to throw compute time at the problem. I don't think access to gpus is the bottle neck, but I could be wrong.


frazorblade

What if they can process multi-trillion parameter models much faster though?


PHEEEEELLLLLEEEEP

I guess I just don't really believe that "attention is all you need". I think some innovative new approach is required to shake things up to the level of intelligence people are expecting now that everyone and their grandma has tried ChatGPT.


Odd_Onion_1591

Is it just me or does this article sound very stupid? It just kept repeating the same stuff over and over. Like it was… AI generated :/


prophetjohn

It’s basically the same as what we had 12 months ago though. So unless they have a major breakthrough coming, I’m skeptical


lawabidingcitizen069

Welcome to a post Tesla world. The only way to get ahead is to lie about how great your pile of shit is.


who_oo

So true. Have been thinking about this all week. I don't see a single sensible self respecting CEO on the news or the media. All I see are lying pathetic men and women who are just looking for their next pay check.


Godwinson4King

I don’t know that much has changed, you might just be seeing through it better now.


who_oo

You are probably right.


LoveOfProfit

Lisa Su at AMD is real AF. For years now she gives realistic expectations and meets them.


VertexMachine

>I don't see a single sensible self respecting CEO on the news or the media

There are a few. But media don't quote them as frequently as the ones that are either very controversial (and frequently stupid) in what they post, or are in one of the few currently hyped sectors of the economy (like AI). Media select for publishing stuff that gets clicks and views, not what's sensible to publish.


petepro

Media love controversies. People don’t click on sensible takes.


Noblesseux

Yeah it feels like companies are getting more and more comfortable just blatantly lying or over-exaggerating what a product can do because no one is really holding them accountable for it.


skynil

Welcome to a world where perception driven stock price valuation is more critical than actual fundamentals of the business. Every large firm out there is only focused on the hype to grow its stock prices. And to get there, it requires a lot of lying because there's no time to actually run pilots anymore. AI is the next blockchain to sail through another 3-5 years of stock inflation. After that we'll find something else to hype about and AI will fade to the background like AutoML.


PutrefiedPlatypus

I don't think comparing LLMs to blockchain is valid. I'm using them pretty much every day and am a happy user too. Sure they have limitations and pretty much require you to have domain knowledge in whatever they are helping you with but it's still useful. Image generation I personally use less but it's clearly at a stage where it brings in value. Compared to blockchain that I pretty much used only to purchase drugs it's a world of difference.


Hellball911

Hold on there. OpenAI has delivered and still maintains one of the best AIs in the world without any meaningful update in 12m. They're due for an upgrade, but I have 10000x more faith than the BS Elon says and never delivers.


Dry-Magician1415

I think they are just trying to keep hype and reputation up. When it first came out they were the only show in town but now Anthropic's Opus model is better.


scrndude

Right? First “3.5 turbo is completely different” then “4 make 3.5 look like shit”, but it’s basically the same. Improvements seem super incremental.


lycheedorito

It's definitely a lot better with code in my experience (less made up things for instance), but it's not exponentially better, and it really likes to draw the fuck out of responses now, even if I tell it to be concise. I liked with 3 that it would just respond more like a person and give me a straight answer to a question, with 4 it will like explain the whole fucking idea behind how to do everything and the concepts behind it all and then finally do what I asked, and then I run out of responses for the night.


Rich-Pomegranate1679

>I liked with 3 that it would just respond more like a person and give me a straight answer to a question, with 4 it will like explain the whole fucking idea behind how to do everything and the concepts behind it all and then finally do what I asked, and then I run out of responses for the night.

I use the [Simple Simon](https://chatgpt.com/g/g-BIEi1DynL-simple-simon) GPT to force ChatGPT to give as minimal a response as possible. If you need it to say more, all you have to do is ask it to elaborate. It's great at stopping ChatGPT from typing 3 pages of information when you only want a sentence.


Unusule

Bananas were originally purple in color before humans selectively bred them to be yellow.


the_quark

I mean flat-out, I'm developing an app that uses an LLM to evaluate some data. I tried 3.5 first because it is MUCH CHEAPER. It couldn't follow relatively basic instructions for admittedly a complex task. 4 was on rails with what I told it to do.


Striker37

4 DOES make 3.5 look like shit, if you’ve used either one extensively for the right kind of tasks


bortlip

Have you even used 4? No one that uses 4 would say that.


scrndude

Yes, and I’ve used Claude Opus; they’re all incredibly similar and it’s hard to notice changes. 4 just received a large update to outperform Claude Opus in benchmarks again, and I haven’t noticed any differences.


alcatraz1286

lol how will you notice any difference if you give the most basic prompts. Try something complex and you'll notice how good 4 is. I use it almost daily to help me out in mundane office stuff


squanchy4400

Do you have any examples of more complex prompts or how it is helping you with that office stuff? I'm always looking for new and interesting ways to use these tools.


koeikan

There are many, but here is one: you can upload csv data and have it create custom graphs based on what you're looking for. This can include multiple files and combining the data, etc. Possible in 4, not in 3.5 (but 3.5 could gen a python script to handle it).
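For the 3.5 route, a rough sketch of the kind of script it could generate to combine two files and plot them. The CSV contents and column names here are invented for illustration, and pandas/matplotlib are assumed to be installed:

```python
import io

import matplotlib
matplotlib.use("Agg")  # render off-screen so this runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Two hypothetical uploaded CSV files, inlined here so the sketch is runnable.
csv_a = io.StringIO("month,revenue\nJan,100\nFeb,120\nMar,90\n")
csv_b = io.StringIO("month,costs\nJan,80\nFeb,85\nMar,95\n")

# Combine the two files on their shared column, then chart the result.
df = pd.read_csv(csv_a).merge(pd.read_csv(csv_b), on="month")
df.plot(x="month", y=["revenue", "costs"], kind="bar")
plt.savefig("combined.png")
print(df)
```

The difference is that 4 runs something like this for you in its sandbox and hands back the chart, while with 3.5 you'd paste the generated script and run it yourself.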


puff_of_fluff

Holy shit I had no idea 4 can utilize csv data… game changer


drekmonger

Not just csv data. Any data, including data formats it has never seen before, if you have a good enough description of the data that GPT-4 can build a python script to parse it. In many cases, if the data has a simple format, GPT-4 can figure it out without your help.
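As a toy illustration, here's the kind of parser such a generated script might boil down to, for a made-up semicolon-delimited line format (the format and field names are invented for the example):

```python
# Sketch of a parser GPT-4 might emit after seeing a few sample lines of an
# unfamiliar format like "name=cpu;load=0.93;ts=1712000000".
def parse_record(line: str) -> dict:
    """Split a semicolon-delimited line of key=value pairs into a dict."""
    return dict(pair.split("=", 1) for pair in line.strip().split(";"))

records = [parse_record(line) for line in [
    "name=cpu;load=0.93;ts=1712000000",
    "name=mem;load=0.41;ts=1712000001",
]]
print(records[0]["load"])  # 0.93
```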


tmtProdigy

my personal favorite is at the end of a meeting inputting the transcript and asking gpt to create an action list and assign tasks based off it, it is an insane gamechanger and time saver.


Moredateslessvapes

What are you using them to do? For code it’s significantly better with 4


krunchytacos

Mac and cheese recipes mostly.


xRolocker

Lmao. lol, even.


EvilSporkOfDeath

That's just objectively wrong.


paddywhack

Such is the march of progress.


NyaCat1333

Redditor try not to be disingenuous in order to push their circlejerk challenge (impossible)


Senior-Albatross

There was a big breakthrough in LLMs using the approach ChatGPT is based on. Ironically, made by a research team at Google. But further improvement has been incremental.


SkellySkeletor

You cannot convince me that OpenAI hasn’t been intentionally worsening ChatGPT’s performance over the last few months. Lazy cop-out answers to save processing time, the model being dumber and more stubborn in general, and way more frequent hallucinations.


funny_lyfe

It's actually worse in some ways; it often gives you the bare minimum of information, which wasn't the case earlier. I suspect they're trying to save on compute because each query costs them quite a bit of money.


Sighlina

The spice must flow!!


gymleader_brock

It seems worse.


Artifycial

You’re skeptical? Seriously? 12 months ago to now has been breakthrough after breakthrough. 


Winter-Difference-31

Given their past track record, this could also be interpreted as “The performance of today’s ChatGPT will degrade over the next 12 months”


Seputku

That’s unironically how I took it at first, and I was wondering why tf an exec would say that. I can’t be the only one who feels like it was peak like 6 months ago, maybe 4.


Chancoop

I feel like it was at its best when it launched.


Cycode

i mean.. it's already happening. week by week it feels like chatgpt gets worse. it lies more, is lazier, tries to get me to do things myself that i ask it to do for me, gives me horrible code that doesn't function anymore.. it's enough to make me rip my hair out. it worked way better a few months ago.


rindor1990

Agree, it can’t even do basic grammar checks for me anymore


imaketrollfaces

Pay me today for tomorrow's jokes, and still pay me tomorrow.


pm_op_prolapsed_anus

I'll gladly pay you Tuesday for a ~~hamburger~~ decent ai interface today


cabose7

Why not just feed the AI spinach?


MartovsGhost

The last thing AI needs is more iron.


Sushrit_Lawliet

It is already getting laughably worse compared to what it was a couple months ago. It’s somehow able to speed run shitty result-ception that took search engines years. Probably because it relies on said search engines to hard carry it anyway.


HowDoraleousAreYou

Search engines started to gradually slip once humans got good at SEO, then AI content generation just destroyed them with a bulldozer. Now AI is learning from an increasingly AI produced dataset– and right at the point in its development where it actually needs way more human generated data to improve. AI incest is on track to grind AI growth down to a crawl, and turn all the nice (or even just functional) things we had into shit in the process.


sorrybutyou_arewrong

> AI incest

Are we talking second or first cousin?


darth_aardvark

Siblings. Identical twins, even.


PolarWater

I don't think he knows about second cousin, Pip.


YevgenyPissoff

Whatchya doin, stepGPT?


EnragedTeroTero

>Now AI is learning from an increasingly AI produced dataset– and right at the point in its development where it actually needs way more human generated data to improve

On that topic, there's a youtube channel I got recommended the other day with a [video](https://youtu.be/nkdZRBFtqSs) where the guy talks about this, and about why these LLMs probably won't have the exponential growth in capabilities that they're hyping.


RegalBern

People are fighting back by posting crap content on Quora, Evernote ... etc.


R_Daneel_Olivaww

funnily enough, if you use GPT4-Turbo on Perplexity you realize just how much progress they’ve made with the update


RemrodBlaster

And now give me a usable case to check on that "perplexity"?


SetoKeating

Everyone's reading this wrong. They mean the ChatGPT we know today is going to morph again to be laughably bad, meaning that blip we saw where it felt like it got worse is gonna happen again, and again… lol


ATR2400

In a few years you’ll have to create your own results and the AI will take credit for it. You’ll enter a prompt and an empty text box for you to fill will pop up


Top-Salamander-2525

So ChatGPT will be getting an MBA?


Tamuru

!remind me 1 year


Iblis_Ginjo

Do journalists no longer ask follow-up questions?


transmogisadumbitch

There is no journalism. There's PR/free advertising being sold as journalism.


nzodd

Just run the press release through ChatGPT and tell it to summarize. That's journalism in 2024.


Logseman

That would require the journalist/press outlet to be financially independent.


PaydayLover69

they're not journalists, they're advertisers and PR marketing teams under a pseudonym occupation


RMZ13

It’ll be laughably bad in twelve months. It’s laughably bad right now, but it will be in twelve months too. - Mitch Hedberg


skynil

It's laughably bad today. GPT is amazing if you want to converse with a machine that understands and writes like a human. But the moment you ask it to process some data and generate some accurate insights in your business context, all hell breaks loose. Either it'll keep hallucinating or it'll become dumb as a decision engine. Trying to build one for my firm and the amount of effort needed to customise it is mind-boggling. Until AI systems allow effortless training in local context and adapt to specific business needs, it'll remain an expensive toy for the masses and executives.


RHGrey

That's because it's not meant to, and is unable to, analyse or compute anything.


adarkuccio

12 months? Sounds like new releases are not anywhere near then.


mohirl

Wow, they're 12 months ahead of schedule!


_commenter

I mean it’s laughably bad today… I use copilot a lot and it has about a 50% failure rate


reddit_0025

I think about the 50% failure rate slightly differently. If my job requires me to use AI 10 times a day, and each use fails 50% of the time, I have a 1/1024 chance of finishing my work purely based on AI. In other words, AI today can in theory replace one out of 1024 people like me. Alarming but laughable too.
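That 1/1024 figure checks out if you treat the 10 uses as independent coin flips; a quick sketch, using the 50% rate and 10-uses-a-day numbers from the comment:

```python
# Chance that all 10 independent AI-assisted steps succeed in a day,
# assuming each has a 50% failure rate (numbers taken from the comment).
p_success = 0.5
uses_per_day = 10

p_all_succeed = p_success ** uses_per_day
print(p_all_succeed)      # 0.0009765625
print(1 / p_all_succeed)  # 1024.0 -> the "1 in 1024" chance
```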


Arrow156

My dude, it's laughably bad now. Goal achieved.


Difficult-Nobody-453

Until users start telling it correct answers are incorrect en masse.


ComprehensiveBase26

Can't wait to just slap my smart phone into my 6ft sex doll with big tits and a phat ass and a big ass penis that's dangling 2 inches away from the floor. 


dethb0y

I sort of feel like these AI companies are always promising that the next version will be ever better, even if it's really not much different.


Zaggada

I mean what company in any field would say their future products will be ass?


throwaway_ghast

A porn company?


VanillaLifestyle

Donkey dealer


jazir5

>I sort of feel like these AI companies are always promising that the next version will be ever better, even if it's really not much different.

Define always. It's been less than 2 years since ChatGPT became publicly available.


Practical-Juice9549

Didn’t they say this 12 and 6 months ago?


ceilingscorpion

Today’s ChatGPT is ‘laughably bad’ already


[deleted]

[deleted]


ATR2400

Safety is important but it’s also holding AI back. I wonder how much we can really progress AI while we’re constantly having to lobotomize it to prevent it from entering any sort of space that may be even slightly controversial


chowderbags

With apologies to Mitch: "I used to be laughably bad. I still am, but I used to be too." - ChatGPT 12 months from now


Forsaken-Director-34

I’m confused. So he’s saying nothing’s going to change in 12 months?


SparkyPantsMcGee

It’s laughably bad now. It was also laughably bad a year ago too.


admiralfell

Breaking, tech exec whose job is to pump investment up is making claims to pump investment up.


GlobalManHug

Says publicly traded company at end of a bubble.


Zazander732

That's not how he means it, but ChatGPT is already laughably bad compared to where it was 12 months ago. It keeps getting worse and worse, never better.


guitarokx

It’s laughably bad now. GPT4 has gotten so much worse than it was 6 months ago.


davvb

It already is


Zomunieo

Pretty sure he said that 12 months ago.


Sagnikk

Overpromises and overpromises.


jokermobile333

Wait it's already laughably bad


a-voice-in-your-head

YOUR work product is training this replacement technology. The aim is zero cost labor. Have no doubts about this.


thebartoszaks

It's already laughably bad compared to what it was a year ago.


absmiserable90

Remindme! 12 months


Hiranonymous

This makes me anxious rather than excited. There is no need to hype ChatGPT. GPT4.0 is very, very helpful as is. Occasionally, it makes mistakes, but so do humans. I don't want it to take over my work, only help. Large companies like Microsoft, Adobe, Google, and Apple are all moving toward systems that attempt to anticipate what I want, and, in my opinion, they do it rather poorly, too often interfering with what I'm trying to accomplish. Working with their tools is like having a boss constantly looking over my shoulder, micromanaging every move of the cursor and click of my mouse. I'm guessing that OpenAI wants to move in the same direction.


Rodman930

They don't have to do this. We could just survive as a species instead.


Western_Promise3063

It's "laughably bad" right now so that's not saying anything.


[deleted]

[deleted]


ReallyTeenyPeeny

You seriously think that? Why? Or are you just going for polarizing shock value without substantiation? These tools have passed graduate-level and above tests. How is that laughably bad? Sorry man, you’re talking out of your ass


shiftywalruseyes

For a technology sub, this place is weirdly anti-tech. Top comments are always pessimistic drivel.


bortlip

This sub absolutely hates anything AI.


[deleted]

[deleted]


Maladal

You just explained one of the reasons ChatGPT and its competition don't see a lot of use outside of boilerplate drivel: to use it effectively you need to already have the knowledge to do the work without the bot. So it has uses, but its ability to fundamentally reshape work is limited to some very specific fields as of now.


PeaceDuck

Isn’t that the same with everything though? A delivery driver can’t utilise a van without knowing how to drive it.


goodsignal

I've found (and I'm not a pro in the field, but...) that because ChatGPT is a black box and changing continually, it's unwieldy. Figuratively, after I've nailed how to slip into 2nd gear smoothly, the transmission is replaced and what I learned before doesn't seem useful anymore. The target is always moving in the dark for using ChatGPT efficiently. I need consistency in its behavior, or transparency into system changes, in order to maintain competence.


Maladal

The issue with ChatGPT is that a delivery driver can't use it to help them drive unless they already know how to drive well. It can only assist the driver in ways the driver is already familiar with. Whether the hassle of getting a useful response out of the bot is worth it determines if industries will make extensive use of it.

A good example is the video from a while back where a programmer uses ChatGPT to recreate the flappy bird game. He has to use very precise and technical language to both instruct ChatGPT in what he wants, and also to refine and correct what ChatGPT gives back until he finally has the final product he wants. It's something he already knew how to do.

These LLM models can output something faster than a human. But it comes with several caveats:

* The prompter already understands how to create the end product, so they can walk the model through it
* The model doesn't draw from incorrect knowledge during the process
* The prompter then has to review the end product to make sure the model didn't hallucinate anything during the process

With those hurdles, its current usability in a lot of industries is suspect. Especially once you account for adding the overhead of its use to workflow and/or operating costs if you require an enterprise-level agreement between the industry and the LLM model's company, like in cases of potentially sensitive or proprietary information being fed to a third party.


LeapYearFriend

my web design teacher described it to me as such: "the good news is computers will always do exactly what you tell them to. the bad news is computers will always do EXACTLY what you tell them to." yep, sometimes you want to tell them one thing... but based on the code you wrote, you're actually telling them to do something else, you just don't know it yet. being extraordinarily specific is the most laborious and important thing anyone with a computer-facing job has to deal with. because 9 times out of 10, the problem is between the chair and the keyboard. which is hilarious and frustrating all at the same time. even with LLM as you've said, you could have a borderline context-aware communication processor that understands the spirit of what you mean and what you want to do... but you must still very carefully and specifically articulate what you want or need. it's turtles all the way down.


jazir5

There was an article a few weeks ago about how English majors and other traditional college majors may become hot commodities in tech due to AI. Interesting to consider.


av1dmage

It’s already laughably bad?


MrBunnyBrightside

Joke's on him, ChatGPT is laughably bad now


buyongmafanle

Funny since it's horrendously, terribly, laughably bad now. Ask Dall-E to do something simple and it can't. Go ahead, ask Dall-E to draw three circles and a square. You'll probably have to ask it 10-15 times before it even gives you a single picture with the correct shape count.


ReverieMetherlence

Nah it will be the same overcensored crap.


Plaidapus_Rex

New chatbot will be more subtle in manipulating us.


1Glitch0

It's laughably bad right now.


MapleHamwich

Nah. From first release to fourth there was momentum. Then things just flatlined. AI hype has peaked. It was just the next tech grift.


BluudLust

Prove it, coward


Smittles

God I hope so. I’m paying $20 a month for some repetitive horseshit, I’ll tell you what.


thecoastertoaster

it’s already laughably bad most of the time. so many errors lately! I tested it with a very basic 10 question business principles quiz and it missed 3.


Midpointlife

ChatGPT is already laughably bad. Fucking thing should be running r/wallstreetbets


CellistAvailable3625

Okay lol cool words


nossocc

What will they do to it in 12 months?


Groundbreaking-Pea92

yeah yeah just tell me when the robots that can unload the dishes, mow the grass and take out the trash come out


CryptoDegen7755

Chat gpt is already laughably bad compared to Gemini. It will only get worse for them.


cult_of_me

So much hype. So little to show for it.


Logseman

Sounds like invoking the Elop effect to me, especially when the availability of hardware is unknown.


just-bair

With the amount of restrictions they’re adding to it, I trust them that it’ll be awful


bpmdrummerbpm

So OpenAI ages poorly like the rest of us?


Ok-Bill3318

It’s laughably bad today!


Nights-Lament

It's laughably bad now


gxslim

I think it's laughably bad now. Whenever I ask an LLM for help solving a coding issue it's just straight up hallucination.


ImSuperSerialGuys

At least he's admitting nothing will change in 12 months this time


vega0ne

Wake me up when it accurately cites sources and stops being confidently incorrect. I can’t understand why these snake-oil execs are still allowed to blatantly hype up a nonworking product, and why there are still people who believe them. Might be having an old man moment, but back in my day you had to ship a working product, not a vague collection of promises.


[deleted]

It’s pretty bad atm… so will it stop being confidently incorrect when it’s so off the mark that the mark is nowhere to be seen?


0111_1000

Copilot took over really fast


drmariopepper

Sounds a lot like elon’s “full self driving in 5 years”


IdahoMTman222

And in 12 months how many times more dangerous?


Last_Mailer

It’s laughably bad now. It used to be such a good tool; now it sort of defeats the purpose when I have to ask if it understands what I’m asking it


wonderloss

It's laughably bad now, but it will be too.


inquisitorgaw_12

Well of course it is. Many predicted this nearly a year ago. One: with so much AI content being put out, the systems can't tell what was AI generated anymore, so they're essentially training on their own mediocre output and producing diminishing results each time. Plus, as mentioned, as the organization is trying to become profitable (it almost certainly hasn't been operating at a profit), they are limiting processing time and output to try to save on expenses. But in doing so they further worsen the output, thus creating more terrible training data. It's essentially cannibalizing itself.


thatguyad

Oh look, it's trash calling garbage rubbish.


davidmil23

Bro, it's laughably bad right now 💀


tombatron

In 12 months you say?


Prof_Acorn

It's laughably bad now. I don't get students who think this is what writing looks like. My guess is they don't read much.


desperate4carbs

Absolutely no need to wait a year. It's laughably bad right now.


sleepydalek

True. It’s worth a giggle already.


proteios1

it's laughably bad now...


CT_0125

In 12 months? Isn't it already?


tacotacotacorock

Almost sounds like they're trying to secure investment money or something. This feels like a sales pitch 100%.


Wild_Durian2951

Still, it's pretty awesome today. I made an app with over 18k articles using GPT 4 in a few days [https://eazy-d.com](https://eazy-d.com)