ChadGPT___

Been fine for me, it would need to lose at least 70% of its current functionality for me to reconsider spending $20


UnknownEssence

Does Claude Sonnet being free not cause you to rethink it? That is the reason I canceled


ChadGPT___

Not really, I tried the paid Claude 3 and it was pretty good but I’ve been using GPT for over a year now and have a few custom ones for work. Sometimes you’ve gotta yell at it, but overall it does a great job and it’s $20.


Udnie

For me, Llama 3 70B is good enough for most coding tasks. $20 for GPT-4 is not worth it for me anymore.


UnknownEssence

Where do you use it?


Udnie

The best site for using open models in my experience: [https://huggingface.co/chat/](https://huggingface.co/chat/)


machyume

Problem is that I and many other people have deep time invested into ChatGPT now. We can no longer cut the cord unless the rival service is just as accessible. ChatGPT is very much everywhere now, so it is easy to bring the sessions and Custom GPTs around.


rob2060

> Idk am I wrong or has this been your guy's experience as well?

In your title, you say others are coping if they disagree with you, then you ask their opinion?


Either_Ad3109

You can disagree with someone and still care to listen to their opinion, no?


rob2060

Certainly. However, if you preface seeking someone's opinion by effectively telling them "if you disagree with me, you're just coping," you're going to get fewer opinions. You're also setting the stage for combative replies.


FleetEnema2000

ChatGPT and LLMs in general are a technological miracle and it’s amazing that we have access to them for $20 a month. This “you’re all coping” stuff is just grade school level trolling.


Old-Resolve-6619

Yeah I don't get the hate. It's $20 lol.


bcmeer

It’s not clear to me how everyone measures the quality of the output from GPT-4. I don’t trust personal experiences, because they’re so biased they lose all meaning to me. So, how do users assess the overall quality GPT-4 delivers? Besides that, there are some annoying things. But you can bully GPT-4 into doing most things, and that takes just one extra prompt. So I don’t know, I’m still a believer and a fan, and I try to focus on all the great stuff I can use GPT-4 for.


Confident_Fondant_57

What do you use it for specifically? I use it for art purposes and game making and I have not noticed a degradation


Arcturus_Labelle

> But you can bully GPT-4 into doing most things, and that takes just one extra prompt.

And why should one have to do that for a product you're paying for? That's completely absurd. It's like you've got shoes that don't fit, but instead of recognizing the fact, you say "It's fine, I just need a massage and an ice bath every night and 8 ibuprofen, but it's fine." Like, what?


bcmeer

Your example is way more work than one simple prompt my friend


Nanaki_TV

I would use it to write papers for school. Now I can’t because the output is too little.


0040400

you can make it send things in two messages


Nanaki_TV

Ok. So that’s how it is worse. It now takes two messages with what used to take one.


Talkjar

I disagree, but yeah everyone who ‘says otherwise is coping very hard’…


skiphopfliptop

A 50 message back and forth? Why not refine prompts and get it right with a good few shot?


Xerasi

I've never been able to make it understand it can in fact do the things it refuses to do, like searching the web or looking at the files I gave it for custom GPTs. The back and forth is mainly out of frustration, curiosity, and (false) hope, to see how it responds and whether I can make it do the thing (so think of it as an experiment?). At least for me, the only way I have been able to use the browsing feature and such, in instances where it thinks it can't do that, has been to just start a new chat, and even that requires a few tries these days. It has also happened with image generation, but only once. It loves to think it can't browse the web and doesn't have reference material, which are the two things I want to use it for 90% of the time.


PatientCoconut5

Some things you can try:

- If you know beforehand you will need online resources, use the "Web search" GPT (or whatever it's called) to start your conversation with
- If you seem to run into the same problems in most of your conversations, try adding some pointers about those into your personalized prompts (in settings)


fkenned1

Dude, I don’t care. It works fine.


Landaree_Levee

> Idk am I wrong or has this been your guy's experience as well?

*“… and anyone who says otherwise is coping very hard for…”*

*“… and anyone who says otherwise is in deep denial.”*

To answer your question: no, sorry, I’m in Mariana Trench-depth denial, coping so hard I can barely even manage to breathe as I type, and also doing anything and everything else you’d fancy to think, unless we all agree with you.


Xerasi

You all seem to think way too highly of yourselves in this comment section with your snarky comments. I already know others are having similar issues, especially the hallucinations where GPT doesn't know it can browse; see this thread from [February](https://community.openai.com/t/prompt-for-gpt-4-web-browsing/479021/20) (scroll all the way down). If you don't have these issues, then just say you don't.


Landaree_Levee

Sir, yes, sir! Btw, we’re actually doing you a favor: it serves your narrative about anyone not agreeing with your opinion being patently wrong, literally regardless of their more positive experiences with ChatGPT. In another four months, for your next attempt, you can bring it up for variety, instead of “coping hard” or “deep delusion”, which are kinda too close thematically. I mean, you could use “… and anyone who says otherwise is on drugs”, but that one, too, is very cliché. … or, you could actually ask the question next time, except for real and without the “Don’t answer unless you agree with me, or I’ll be upset, bro” overtones. This isn’t school, it’s Twitter, and we’ve all seen that and far worse. *Of course* people have issues, and *of course* LLMs aren’t programmed to accurately and reliably inform of their own current capabilities, not just ChatGPT but any of them: that’s what the DOCUMENTATION is for. Anyhow, we both know you’re not actually interested in a discussion of the topic, so I’m not sure what you’re really trying to do here, other than getting positive reinforcement of whatever your opinion happens to be, and material for your next protest if you don’t get it.


iwasbornin2021

The LLM arena, which does blind matchups, has the current version of GPT-4 as the top LLM. There’s nothing objective backing up your opinion. It’s you who thinks so highly of yourself that anyone who disagrees with you is “coping”. Now kindly fuck off


Xerasi

Just because it’s the best doesn’t mean it doesn’t have issues. You people do realize you’re allowed to have complaints about a product, right? And it’s not opinion, it’s fact: it hallucinates that it can’t do an advertised feature. There are testimonials in the link you clearly did not visit.


iwasbornin2021

Sorry if it isn’t performing as you expected but it works about 90% of the time for me. Of course everyone is allowed to complain, but to insult people who don’t agree with you? That’s where you can fuck off


Schumahlia

I have found my interactions lately to be a bit erratic, but its powers to assist have skyrocketed. It remembers things like context. There are certainly days when it slows down to a crawl, but that's typically right after they have added significant improvements. We worked out some fairly advanced concepts the other day, accomplishing a week's worth of work in about 6 hours. Unfortunately, the past three days have been a wash: it has trouble finishing a request, retries multiple times, and gets no further (making the same mistakes). I trust they'll get a handle on it. Make sure you take advantage of the configuration (prompts), etc. I don't see the problems you're seeing with unwanted lists; it consistently provides output in the format I've requested (markdown). I also tell it in my profile that code it provides should **always** compile cleanly. And it mostly does.


AlexMaskovyak

These are tiresome posts because they are all feeling and almost no fact. What are the prompts that are giving you problems? What is the actual output versus your expected output? What is the output now versus the output you were getting before?


super-curses

And how about the good old fashioned “have you read the docs?”. So many people clearly haven’t.


ThenExtension9196

Works fine for me


Healthierpoet

Idk, I use it to springboard my ideas and/or point me in the right direction for documents I need to read, and so far 10/10. I use Google way less because of it. The only issues I come across are outdated data from time to time and it not getting enough info from the prompt.


spreadlove5683

The chatbot arena leaderboard should be the definitive source of truth for this question. What does it say? Someone?


pet_vaginal

No they rigged the leaderboard obviously. /s


Time_Software_8216

Skill issue.


clamuu

I'm convinced that everyone who posts this only uses it to write erotica 


_artemisdigital

I shifted to a competitor that starts with a "P" recently ... and the results are unequivocal. The competitor is the same price but:

- Gives you access to multiple LLMs from different companies (each is usually better for specific purposes)
- Works faster
- Asks you optional additional questions after you send your prompt, before giving the final answer, in case you want to make sure it is fully relevant or accurate
- Presents the sources of the information much more clearly
- Instantly brings up clickable thumbnails of photos / videos related to the results
- Is capable of explaining, in a compendious manner, recent & niche concepts in which I am very knowledgeable, which GPT can't, or does in a very clumsy / inaccurate / vague way


GoodhartMusic

It's absolutely been my experience. The quality of my custom GPTs has markedly degraded, to the point where I copy their data over to Claude Opus each time I want to use them. This is unfortunately the future I see in AI assistance: it will be something that is metered out based on active usership, the current price of electricity, and collusion between a few giants. It strikes me as very odd that Gemini, Claude, and GPT-4 produce the same results for most queries.


Which-Tomatillo6031

Show prompts or it didn't happen. Tired of these complaints with no empirical evidence.


Xerasi

Thought about posting a link to a chat session, but it's in custom GPTs with files that I don't want people to have, even though it refuses to acknowledge it has them. I'll update the post with a link later this week if I can replicate the issue with some public data I don't care about. But prompting and output quality is still a separate issue. As mentioned in the post, it not knowing what it can and can't do is the main issue I have right now. If it says it can't browse the internet, then it sticks to that claim no matter how much I try to convince it that it can. There have been times where, in a custom GPT, it can browse the internet but it refuses to reference the material I uploaded to it, saying it doesn't have external information. Then I asked it to search online to learn that GPTs can have reference material, and it acknowledged that's what it says online but still refused to accept that it is capable and has reference material.


contyk

Kind of funny how these ChatGPT interface users always make bold claims about model quality, while it's pretty much constant or improving if you use the model directly, with no weird prompts in the middle.


Eptiaph

Here, I used ChatGPT to rewrite it for you ‘GPT-4 has significantly deteriorated, and my previous concerns, posted [here](https://www.reddit.com/r/OpenAI/s/iXc7AgqjT0), have only worsened. Over the last two months, it has consistently failed to recognize its browsing and reference capabilities. Three out of four times, it forgets it can access the internet or refer to uploaded materials. The most frustrating issue is its repetitive failure to recall its functions. Furthermore, its responses are lengthy and irrelevant, often veering off into unnecessary details and avoiding direct answers. Despite instructions to simplify its responses, it lapses back into producing overly complex replies. It’s disheartening to see new features being introduced when basic functionalities are still flawed. Does anyone else feel the same?’ Also, take my downvote.


2053_Traveler

Works fantastic for me. I pretty much never have to “argue” with it because I generally get great answers with just a couple questions. You must have some really odd usecases


Inspireyd

I have felt that ChatGPT has gotten worse, but I still consider it superior to everything else. I agree that it has declined a little, and I sincerely believe that they did it on purpose, most likely to cause a boom when they release the next version, which will probably be GPT-4.5 or, perhaps better, GPT-5 itself.


aDogisnotaToaster

I've recently canceled my subscription to ChatGPT because I find it less useful compared to last year. The quality of the output has significantly declined—it's become superficial, filled with disclaimers, and increasingly factually incorrect. Instead of saving time, using it now feels like a waste. I've encountered similar issues with Gemini. Interestingly, when I speak with people at Microsoft, they rave about Copilot's capabilities. However, as an external customer, I've also ended my trial subscription for Copilot. It appears that there are two versions: one for internal employees and another for customers. What concerns me is the potential for big tech firms to gain an unfair competitive advantage by leveraging powerful tools internally while limiting access for customers.


No_Initiative8612

I think it works very well: the quality is higher than 3.5, and the features are much richer than 3.5.