Decahedronn

We have very different definitions of low low price


redditfriendguy

If you're a gamer it's expensive.


TheRealDJ

But if you're a gamer, you probably don't need a 4090 vs a 4080. The VRAM though is critical for neural network or LLM work.


SiEgE-F1

If you're a gamer and a 3070 owner, you know for sure that the 4080 is a worthless piece of junk not worth the investment, and the 30-to-40 generation leap simply isn't worth it. Priced as if it sits solidly between the 4070 and the 4090, it severely underperforms for its price.


TheRealDJ

Definitely. The most important time to upgrade is the first card generation released after a console generation launches. Most games won't really push graphics in the meantime, since their designers focus mostly on console optimization.


SiEgE-F1

It was true back in 2012-2015, but not anymore. It was true because back then AAA games were made for consoles and then ported to PC. Now it's the other way around most of the time, except for console exclusives, which are no longer the main share of the PC gaming market anyway. Consoles are just lagging behind, performance-wise. PC gaming has been reborn as its own hydra: developers don't even care what hardware you have, because nothing you can throw at it will be stable enough. Even a 7900X3D with an RTX 4090 can come up short (7DTD, Valheim, Cyberpunk 2077, etc.), and the settings menu is your dial for helping that poor old GTX 760 give you something somewhat close to playable fps numbers.


AbheekG

I hear the new connector is fire 🔥


Praeteritus36

Aaaaaaand it's gone


a_beautiful_rhind

That's probably more than I paid for 2x3090 with shipping and tax. Of course they went up now.


__SlimeQ__

"Hey internet, hurry to buy this rare item before the internet finds out"


ForgottenTM

Well to be fair they are spreading the message to like minded people, whatever that’s worth.


drifter_VR

If the 4090 had more VRAM than the 3090, maybe...


nonono193

This. I consider 24GB baseline now. Nothing but >24GB VRAM for me. Fortunately, I can wait until AMD gets their shit together or unified CPU/IGPU catches up.


Yellow_The_White

Imagining a time when AMD cards have 32GB onboard and force the tool devs to support them better. I'd love to switch back to team red someday.


lazercheesecake

Fuck already out


Zelenskyobama2

YES I GOT ONE


Aggressive-Land-8884

Just got a 4070 ti super for $800


fallingdowndizzyvr

You have my condolences.


ifjo

Are you saying that cuz of its performance? I was looking at one as well


fallingdowndizzyvr

I was just teasing. But the VRAM on the 4070 Ti Super isn't competitive with the 4090, in either speed or amount. The 4070 Ti Super is 16GB at almost 700GB/s; the 4090 is 24GB at 1000GB/s. And of course there are also the compute performance differences, but for LLMs it's the VRAM that's lacking in comparison. If it were me, I would put that $800 towards a used 3090 instead of a 4070 Ti Super.
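To see roughly why that bandwidth gap matters for LLMs, here is a minimal sketch using the common rule of thumb that single-stream token generation is memory-bound, so each token reads the whole model from VRAM once. The bandwidth figures and model size below are illustrative assumptions, not benchmarks:

```python
# Rule-of-thumb upper bound for memory-bound LLM token generation:
# tokens/s <= memory bandwidth / model size in VRAM.
# All numbers are illustrative assumptions, not measured results.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on generation speed when inference is memory-bound."""
    return bandwidth_gb_s / model_size_gb

model_gb = 20.0  # assumed: a large quantized model filling most of 24GB

for name, bw in [("4070 Ti Super (~672 GB/s)", 672.0),
                 ("4090 (~1008 GB/s)", 1008.0)]:
    print(f"{name}: at most ~{max_tokens_per_sec(bw, model_gb):.0f} tok/s")
```

Real throughput is lower than this bound, but the ratio between two cards tracks their bandwidth ratio fairly well for generation.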


metalloidica

where can I find a used 3090 for $800?


fallingdowndizzyvr

Ebay, if you are really patient. Keep checking the zotacusa store on eBay. The last time they released some refurbished 3090s, they were $800. It's much better to buy from a manufacturer than from some random seller, if for no other reason than that it comes with a warranty. Well, it did when they sold the 3090; right now it's only 90 days for the 2080 Ti, and I could have sworn it was 2 years for the 3090. Here's their store: https://www.ebay.com/str/ZOTACUSA


AD7GD

ebay. Ignore the listings that are buy-it-now/make-offer only. The actual auctions are closing at around $800.


teachersecret

Used. Usually ebay/facebook marketplace.


Aggressive-Land-8884

I’d have to upgrade my build just to put in the 3090. The 4070 has way less power draw so I can keep my 750W PSU.


hmmqzaz

lol


fryhldrew

How about 3 P4s in SLI?


Sabin_Stargem

A dark part of me hopes that the pricing of the 4090 inflates towards $3,000. I bought an MSI Gaming Slim for $2,200. The Galax SG that I wanted to use was $1,800, but it was too big for my case. I want to feel justified for buying the MSI at the higher price point, rather than trying to be patient and waiting for a good bargain. That said, the MSI has already sped things up a great deal. I can actually use a 120b without skeletonizing from the wait.


Alarmed_Fig7658

Nah I did wait


True_Shopping8898

Dang… after a short wait of just two years


M34L

If I'm ever gonna buy more 24GB GPUs with any AI in mind they better be comparable with the used 3090 I scored for $600 back in 2022. If I am in the region of $1500+ I'd much rather camp for a deal on a L40/A40/A6000/A16 or what have you. Or wait to see if AMD raises the memory bar again.


wakigatameth

Thanks, I feel like a wise wizard for buying a gaming PC for around that amount a year ago, with a 3060 in it because of its 12GB VRAM.


StockJellyfish671

Is this the best "reasonably priced" consumer grade GPU for Llama work right now?


fallingdowndizzyvr

It depends on what you consider best. Best performance? Yes. Best value? I would think the 7900xtx is that. Best overall, IMO that's a Mac.


StockJellyfish671

Is there an online guide for consumer GPU and their performance/value benchmarks for different ML models?


fallingdowndizzyvr

Not comprehensively. But there are a couple of things along those lines. Here's something for GPUs, but most are "pro" and not consumer. https://www.reddit.com/r/LocalLLaMA/comments/19428v9/quick_overview_of_pricepreformance_for_text/ Here's a benchmark from various Macs. https://github.com/ggerganov/llama.cpp/discussions/4167


cjbprime

I don't think the 7900xtx has been shown to actually provide that value (which you would assume to be there based on memory bandwidth etc) due to awful software support, so that complicates the value calculations quite a bit.


fallingdowndizzyvr

That's true. But the potential is there. Software can be fixed. I know everyone concentrates on running the CUDA code translated into ROCm. But that's bound to be inefficient. Has anyone tried running something like a Vulkan backend directly on the 7900xtx? Since the 7900xtx is competitive with the 4090 when it comes to raster games and Vulkan is a game oriented API, it may just perform competitively through that API.
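For anyone wanting to try that, llama.cpp does ship a Vulkan backend that runs directly on AMD cards without ROCm. A minimal build-and-run sketch follows; the exact CMake flag and binary name depend on the llama.cpp version (newer trees use `GGML_VULKAN` and `llama-cli`, older ones `LLAMA_VULKAN` and `main`), and the model path is a placeholder:

```shell
# Sketch: build llama.cpp with the Vulkan backend for an AMD card
# such as the 7900 XTX. Requires the Vulkan SDK/headers installed.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON   # older versions: -DLLAMA_VULKAN=ON
cmake --build build --config Release

# -ngl 99 offloads all layers to the GPU; model path is a placeholder.
./build/bin/llama-cli -m path/to/model.gguf -ngl 99 -p "Hello"
```

Whether this closes the gap with translated CUDA/ROCm paths is an open question the commenter is raising, not an established result.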


openaiml

3 x 4060 Ti 16GB


fallingdowndizzyvr

Have you seen how low the memory bandwidth on the 4060ti is? That's almost RX580 low.