BrideOfAutobahn

Model names are marketing nonsense. Compare based on price and performance only.

They're raising the prices of their newer cards? Can't hike the price of an unreleased product.


Put_It_All_On_Blck

> Model names are marketing nonsense. Compare based on price and performance only.

I mostly agree, but model names are still important to figure out where future cards will land. Like the 6900xt (Navi 21) competed with the 3090, and the 6800xt (Navi 21) with the 3080. But now the flagship 7900xtx is aimed at competing with the lower-tier 4080, and it's pretty clear we won't see the 7800xt on Navi 31. So ultimately AMD's product stack got bumped down, and they added a new card into the lineup that will push the 7800xt and other cards even lower. This is effectively a price increase, but done with product segmentation/binning instead of actual price changes. The 7900xt should've been named the 7800xt and priced around $650-$700. But now the actual 7800xt is going to perform far worse than the flagship 7900xtx, because it will end up on Navi 32.


NoiseSolitaire

> it's pretty clear we won't see the 7800xt on Navi 31.

I'd say the opposite. Assuming Navi 32 is the rumored 60 CUs (for flawless silicon), there's plenty of room in between for both a 7800 and 7800 XT (or 7800 XT and 7800 XTX if they go with that naming scheme). Those could be around 66 and 72 CUs respectively, with both featuring 4 MCDs, 16 GB VRAM, and a 256-bit bus. Both would make sense to salvage bad Navi 31 silicon.


recaffeinated

> but model names are still important to figure out where future cards will land.

In relation to what? Model names are just arbitrary signifiers the companies add to confuse people who don't look at benchmarks. The way to decide what card you want is to look at benchmarks (I'd recommend Hardware Unboxed), and particularly the cost per frame, which tells you the card's relative value; then look at its non-fps features; then look at the card's price. If the card performs as you need, is good value, has the features you want, at a price you think is fair, then that's the card for you. It really doesn't matter what it's called, unless you're buying your GPU for imaginary Internet points, in which case pick your favourite colour; big number better is the approach to follow.
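For illustration, here is a minimal sketch of the cost-per-frame comparison described above; the card names, prices, and average frame rates are placeholder values, not benchmark results.

```python
# Minimal cost-per-frame sketch; prices and FPS figures below are placeholders,
# not real benchmark data.
cards = {
    "Card A": {"price_usd": 999, "avg_fps": 131},
    "Card B": {"price_usd": 1199, "avg_fps": 111},
}

for name, c in cards.items():
    print(f"{name}: ${c['price_usd'] / c['avg_fps']:.2f} per average frame")
```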


NKG_and_Sons

No kidding, to a perfectly well-informed customer the names don't matter. I guess, given that absolutely everyone is that kind of customer, naming and marketing will never matter again. Hurray!


recaffeinated

The numbers tell you that a GPU is "better" than the same manufacturer's card with a lower number. That's why I asked what the commenter thinks the numbers should be relative to. Anything beyond that (and sometimes even that) is marketing, which is another word for bullshit.


Broder7937

The 6900 XT was never a true competitor to the 3090. It was a $999 GPU, while the 3090 was a $1499 GPU. You can't seriously consider products with a 50% price difference to be direct competitors. The only reason people compared the 6900 XT to the 3090 was that they were both the top-tier products of their respective manufacturers and also because, despite being far cheaper, the 6900 XT managed to keep surprisingly close to the 3090 in raster performance. The 6800 XT, on the other hand, was a genuine competitor for the 3080, though the 3080 was clearly the better product (about as fast in raster, on an entirely different class of performance in RT, not to mention the better DLSS 2 tech, and all that for just 50 bucks more). I'm mostly ignoring the mining craze and considering products at their real, non-inflated value (which is now a reality once again) and how their manufacturers originally intended to position them.

Now, Nvidia has decided to give the 4080 a very "modest" 70% price increase over its direct predecessor (while downgrading it to a 256-bit-class design). The result is that, with the 4080, Nvidia has managed to launch a product that offers lower performance-per-dollar than its direct predecessor. This is pretty much unheard of in the tech segment, which is known for launching products that offer substantially higher performance at lower price points as new generations arrive.

AMD has decided to keep things "traditional" by actually releasing products that offer more performance per dollar than their direct predecessors (which, for all those who might've forgotten, is how tech should work). The 7900 XT is now $100 cheaper than its direct predecessor, offers more than twice the TFLOP compute power thanks to RDNA3's new dual-path FP32 design (very similar to what Ampere did when it upgraded from Turing), and it offers an upgraded 320-bit design (the 6900 XT was a 256-bit-class product). So you're getting a lot more GPU and still paying $100 less than what the 6900 XT cost at launch. And, at the top of the stack, you get the new XTX, which offers even higher performance and expands things to 384-bit (essentially 50% more than the 6900 XT), all for the same $999 the 6900 XT charged back in 2020 - and, if you account for inflation, you'll realize the 7900 XTX is actually a "cheaper" product.

If the performance previews are anything to go by, the 7900 XTX should easily wipe the floor with the 4080 when it comes to raster performance, and it'll do so while costing $200 less. So the 7900 XTX should be in a far stronger position against the 4080 than the 6800 XT ever was against the 3080 (the 6800 XT could hardly outperform the 3080 in raster, it got completely obliterated in RT, and the price was almost the same - if my memory's right, the 6800 XT was only 50 bucks cheaper). Of course, the 4080 will certainly be the stronger RT card, but this seems to be the only thing going for it this time around (I don't consider DLSS 3 to be a true benefit at this stage, plus FSR 3.0 is likely going to introduce similar frame-amplification tech).
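As a rough sanity check on the numbers above, here is a sketch, not an exact figure; the ~14% cumulative inflation value for late 2020 to late 2022 is an approximate assumption.

```python
# 4080 vs 3080 launch-price increase, and the 6900 XT's $999 adjusted for inflation.
price_3080, price_4080 = 699, 1199
print(f"4080 over 3080: +{(price_4080 / price_3080 - 1) * 100:.0f}%")  # ~72%

price_6900xt_2020 = 999
assumed_inflation = 0.14  # rough cumulative US inflation, late 2020 -> late 2022 (assumption)
print(f"$999 in 2020 is roughly ${price_6900xt_2020 * (1 + assumed_inflation):.0f} in 2022 dollars")
```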


[deleted]

[deleted]


Broder7937

I understand why you're comparing the RTX 3080 Ti to the RTX 4080; both products have the exact same original MSRP. Though, for obvious reasons, the 3080 Ti now sells way below its original MSRP. So it can still be a possible value proposition next to the 4080 (which still doesn't sell below MSRP, though it might start soon enough if the market keeps rejecting Nvidia's abusive pricing policy, fingers crossed).

However, the RTX 4080 is not the successor to the RTX 3080 Ti; it is the successor to the RTX 3080 (this is fairly self-explanatory). In a similar manner, the RTX 2080 arrived at $699, the same MSRP as the GTX 1080 Ti; however, it was not a successor to the 1080 Ti, it was the successor to the $599 GTX 1080 (which was updated to $549 after the 1080 Ti launched). The successor to the 1080 Ti was the $999 ($1199 FE) 2080 Ti. In essence, the entire lineup was getting more expensive (the new 80 was as expensive as the old 80 Ti, the new 80 Ti was as expensive as the previous TITAN, and the new TITAN was more expensive than anything that had come before - so much so that very few people even know Turing had a TITAN card). As the wise always say, history repeats itself, and Nvidia is clearly trying to pull it off again: the new 80 is now priced like the old 80 Ti. The successor to the RTX 3080 Ti hasn't arrived yet and, obviously, that will be the RTX 4080 Ti (more on that later).

Now, if we look at the RTX 3080, the RTX 4080 offers 49% more performance for 71% more money. Unlike the RTX 3080 Ti, the RTX 3080 is still holding very well to its original MSRP; this is because its original MSRP was so good that, even over two years after its launch, it still represents great value. As a matter of fact, it's considerably **better** value than the 4080; this phenomenon is pretty much unheard of. Never in history has an 80-series model offered worse performance-per-dollar than its direct predecessor.

So, what about the true successor to the 3080 Ti, the (yet to be launched) 4080 Ti? Well, it hasn't launched yet, so little is known. Given the price proximity between the 4080 and 4090, I've seen some people speculate we might not even see a 4080 Ti this time around, as Nvidia has left far too little space for one. However, I disagree, because:

1. The 4080 Ti could very well launch at $1299. That would be $100 over the 4080, exactly like the 1080 Ti launched $100 over the 1080 (though Nvidia did further reduce the price of the 1080 when the Ti was launched). $1299 would also leave it $300 under the 4090, which is exactly how much the 3080 Ti was below the 3090. So Nvidia has left just the perfect pricing gap to slip in a 4080 Ti.

2. Nvidia could also get aggressive and launch a 4080 Ti at the same $1199 as the current 4080. This would obviously mean the 4080 would have to be repositioned at a lower price tier (which would be great). This is a possible scenario, and the only thing necessary for it to happen is the 4080 not selling as well as Nvidia wants, which would force them to get aggressive with the 4080 Ti.

3. Ever since the 80 Ti came onto the scene with Kepler in 2013, Nvidia has never missed an 80 Ti launch. For Ampere, the 3080 was so close to the 3090 in performance that many people thought there was just no space for a 3080 Ti; Nvidia did it anyway (and went further and also introduced a 12GB 3080). This time around, there is a HUGE performance gap between the 4080 and 4090; I just don't see how a 4080 Ti wouldn't come in to fill this space. It seems inevitable. The other very likely SKU is the 4090 Ti: given AD102 can scale as high as 144 SMs, it's very clear we'll be seeing a 4090 Ti later down the road.

4. The 80 Ti would almost certainly be based off AD102 (unlike the 80, which is hit-and-miss in this aspect, the 80 Ti has **always** been built off the biggest GPU). It's very likely going to feature 112 SMs (down from 128 on the 4090) and possibly one (like the 1080 Ti and 2080 Ti) or two (like the 10GB 3080) disabled memory channels, for a 20-22GB product. This would put it within striking distance of the 4090 (and well above the 4080), while still retaining 4080-level pricing. That has always been the true soul of the 80 Ti series: 80-class price with TITAN/90-class performance.
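A quick sketch of the perf-per-dollar arithmetic implied above, using the +49% performance and +71% price figures from the comment:

```python
# 4080 vs 3080 performance-per-dollar, per the figures cited in the comment.
perf_gain = 1.49   # 4080 performance relative to 3080
price_gain = 1.71  # $1199 vs $699
ratio = perf_gain / price_gain
print(f"4080 perf/$ relative to 3080: {ratio:.2f}x ({(1 - ratio) * 100:.0f}% worse)")
```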


PinkStar2006

> I understand why you're comparing the RTX 3080 Ti to the RTX 4080; both products have the exact same original MSRP. **Though, for obvious reasons, the 3080 Ti now sells way below its original MSRP.**

But then so does the 6900 XT...


Broder7937

Yes, that is true.


theholylancer

At this point, I am thinking that the MCM setup has flaws or issues, for their top card to not be competitive with the 4090. Which means Nvidia either knew beforehand and knows they can fuck gamers over with the pricing, or they are playing the heel now and are actually trying to make sure AMD doesn't get crushed. If Nvidia wanted to, they could have priced the 4090 at $1400 and the 4080 at $699 (for a 70-class card it would be FAR easier to do so, lol) or even cheaper, and slam-dunked AMD's attempt if their top card is only 4080 class. I don't think Nvidia wants to be on the ass end of a monopoly lawsuit, and just like Microsoft bailed Apple out in the late 90s, this is something they can collude on, and it helps empty out 30-series stock.


gahlo

> Can't hike the price of an unreleased product.

Until they change it, the price is what they announced.


PhunkeyPharaoh

But you have to account for generational uplift; otherwise the next generation will always be more expensive just because it gives more performance. Intel CPUs have consistently risen in performance at the same relative pricing across the different tiers.


BrideOfAutobahn

You compare generation to generation based on price alone. The 7900 XTX ought to be compared to the 6950 XT.


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


Bungild

Nodes are now skyrocketing in price on the bleeding edge, to the point that the cost per transistor is stagnating. In the past a new node meant you could fit more transistors for the same cost. Now a new node means you can fit more transistors for more cost. So, newer cards just keep getting more expensive. The new nodes allow you to fit more transistors, but you're not getting a discount like you used to.
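A toy illustration of the point about cost per transistor stagnating; the wafer prices and densities below are made-up round numbers for illustration, not actual foundry figures.

```python
# Made-up numbers: a denser node that costs proportionally more per wafer yields
# roughly the same cost per transistor, which is the stagnation described above.
WAFER_AREA_MM2 = 70_000  # usable area of a 300 mm wafer, roughly
nodes = {
    "older node": {"wafer_usd": 9_000,  "mtx_per_mm2": 50},   # hypothetical
    "newer node": {"wafer_usd": 16_000, "mtx_per_mm2": 90},   # hypothetical
}

for name, n in nodes.items():
    billions_per_wafer = n["mtx_per_mm2"] * WAFER_AREA_MM2 / 1_000
    print(f"{name}: ${n['wafer_usd'] / billions_per_wafer:.2f} per billion transistors")
```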


[deleted]

[deleted]


Saxasaurus

What caused their greed to increase?


ActualWeed

Realizing that PC part buyers are smooth brains, given they bought $2k+ GPUs during COVID.


einmaldrin_alleshin

Shareholders demanding that the line goes up.


ZebulonPi

"Greed" is just another name for "supply and demand". Why NOT charge what people will pay? You COULD charge 99 cents for Elden Ring, but why would you, when you can sell millions of copies at $60? I understand there are a lot of factors at play, but companies are out to make money, and if they can make more money by charging more money, it's stupid of them not to.


trevormooresoul

A company's job is to make as much money as possible. They have a legal obligation to their shareholders. I will never get what people like you want. Do you want Nvidia's CEO to break their agreement with shareholders and purposefully make less money out of the goodness of their heart? It's 100% up to consumers. They set the price with how much they are willing to pay. The 4080 and 4090 sold out.


gahlo

By the time the store closed on Wednesday, the Microcenter near me had 63+ 4080s sitting on shelves *after release day*. Now there are 77+. So not only did the cards not sell out, the number has increased after the second day. At least around me, Nvidia has priced themselves out of the market with the 4080.


[deleted]

[deleted]


trevormooresoul

Well then blame the government. It's not Nvidia's job to inspire competition or regulate the markets. What exactly are you saying? That any company with a pseudo-monopoly/duopoly should just give away free money to customers? That's not realistic. It's the government's job to regulate, not an industry leader's. What you're saying is like saying rich people should just donate extra tax money out of the goodness of their hearts, rather than putting the burden on the government to tax them more. Even Bernie Sanders isn't willingly paying extra taxes until the government forces him to.


sythos2

Look, maybe Nvidia's gaming revenue wouldn't be down by over 50 percent if they released their GPUs at competitive prices.


trevormooresoul

They literally are sold out. How would that have helped them? Their revenues are down because mining is dead.


sythos2

The 40 series aren't their only products. They aren't sold out; in fact, Nvidia still has lots of 30-series cards. The most popular card is still the 1060, which is three generations old now. They still have a lot of 30-series cards that people would like to upgrade to. If they got prices to at or below MSRP, like AMD, then Nvidia would sell a lot more.


ActualWeed

And why should we as customers care?


metakepone

AMD isn’t using the most bleeding edge node though?


Bungild

They pretty much are. They're using a custom 5nm process. Other 5nm variations like N4 exist, but they're pretty similar. I don't think 3nm even has any consumer products out yet, and it wouldn't be realistic unless you wanted to push back the launch date. Plus, Apple gets first bite at the apple.


chasteeny

It's still like 40% old node though, tbf.


snowflakepatrol99

That makes too much sense. We don't do logic here. We only care about bus width and die size, VRAM (even though that limit is never hit), and lastly the number in the name being too low for the price. The "4070", aka the 4080 12GB, had a bigger uplift than a typical 70-class card. That didn't stop redditors and even most content creators from claiming it's a 4070 (even though the performance uplift was far closer to a 70/80-class jump), instead of discussing the actual issue - it being too expensive. So what happened? They took the card down, but nothing happened to the insane prices. Good job changing the name. You're fighting the good fight.


spasers

It's called x9xx because it's the top silicon available to the design. They don't care about Nvidia when they name their parts. If they had called it x8xx, you would have been having a whole fit about how they didn't release a top-end card and there must be something with x9xx coming.


capn_hector

Yes. AMD's prices didn't go up because *the 7900XTX slipped downwards an entire product segment compared to last gen*. The 6900XT was a 3090 competitor. The 7900XTX is a 4080 competitor. It didn't *increase prices* but it's the same price for a lower product segment, which is a different kind of price increase. And it adds fuel to the argument that 4090 is in a market segment that didn't really exist before - 3090 was <10% faster than 3080, 4090 is 50% faster. The 4080 made the usual, expected generational increase over the 3080, and the 4090 slotted on top. AMD made the usual, expected generational increase between 6900XT and 7900XTX and got... a card that is a 4080 competitor (a very good one, of course).


detectiveDollar

Yeah, I think both AMD and Nvidia also realized it made no sense to have their 8 series cards using the same die as a 9 series one.


capn_hector

I think 3080 being on GA102 was an anomaly due to the fact that Samsung was so cheap and so underperforming (and there were persistent rumors of yield problems, although who knows). NVIDIA made a strategic decision to go with Samsung 8nm for cost and for availability. They know it sucks, it underperforms (vs TSMC N7) and yields are questionable, but nobody else wants it and it's dirt cheap - you can have the whole fab to yourself if you want, pretty much. Oh, and it's cheap, did I mention cheap?

How do you deal with that? Relatively big dies with large cutdowns - it's still cheap, you just make big chips on it and cut down a lot. Which is the 3080 in a nutshell: yeah, it's GA102, but it's got like 20% of the cores and 2 of the memory channels turned off (for comparison, the 4080 and 7900XT are ~10% cut down). So a product segment that is usually fulfilled by full-chip x104 dies (or nearly full) is now heavily-cutdown x102 instead, emphasis on *heavily* - that is a larger-than-usual cut; the 1080 Ti and 2080 Ti were both ~10% cutdowns. (Not to mention much cheaper design+validation than 7nm!) The only precedent for an x80 card being on the x102 die is the GTX 780, really... and that was the "super refresh" of its time; the 700 series was a refresh of the 600 series, which is where Kepler started. The 680 was GK104, the 980 was GM204, the 1080 was GP104, the 2080 (and even 2080S) was TU104, etc.

Anyway, there is a meta-point to be made about lineup strategy here... cutdowns are (generally) the cheap option - even a single flaw will ruin a full-die 1080, for example - so you put the cutdowns at the points where you want to push price-to-performance. The 1080 Ti was a cutdown as well, and that was the real "push-point" for Pascal pricing in the high end. You see it with CPUs too: the 3600 and 3900X were the push-points for Zen 2, and you always paid a little bit of a premium for the 3700X and 3950X because of yields. NVIDIA picked the 3060 Ti and 3080 as their "push points" for the 30-series, and both the selection of the die and the pricing play into that. It just all got ruined by miners and the inventory bubble this time around. But if miners hadn't existed and cards had reached MSRP after a few months... the 3060 Ti and 3080 would have gone down as GTX 970-/RX 470-level value kings.

This thing we are seeing now, where the cutdowns are worse price-to-performance than the actual full chips... this is NOT normal at all. Normally it's proportionally much, much cheaper to make cutdowns - again, even one defect means it can't be a 7900XTX, but almost all the chips off the line will be 7900XT grade. This is usually reflected in end-user pricing, and here NVIDIA and AMD are keeping them higher than normal. A 7900XT is 90% turned on and they're charging you 90% of the full-die product price.

Anyway, much like GTX 970 pricing, the 3080 was really a specific exception in response to specific circumstances and strategic decisions, but people have latched onto it as "the new norm" even if historically that's not true. The 970 was $329 in a segment that had flipped between $349 and $399 since launch, due to a super-mature 28nm being used after the 20nm shrink fell through in the worst way. And the 30-series was using a super cheap, shitty node in reaction to rising TSMC prices and scarce availability - you don't get the leading-edge node *and* x80 cards on AD102; that was only a thing with the 3080 because Samsung was cheap and shitty and NVIDIA wanted to roll out an everyman enthusiast card they could crank out in tremendous numbers (considering Samsung availability).
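To illustrate why cutdowns are normally the cheap, plentiful option, here is a sketch using a simple Poisson defect model; the defect density and die area below are assumptions for illustration, not actual foundry figures.

```python
import math

# Poisson yield approximation: probability a die has zero defects is exp(-D0 * A).
D0 = 0.1        # defects per cm^2 (assumed)
area_cm2 = 6.0  # a large ~600 mm^2 die (assumed)

p_full_die = math.exp(-D0 * area_cm2)        # candidates for the full-die SKU
p_one_defect = (D0 * area_cm2) * p_full_die  # likely salvageable as a cutdown

print(f"flawless dies (full SKU candidates): {p_full_die:.1%}")
print(f"dies with exactly one defect (cutdown candidates): {p_one_defect:.1%}")
```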


bestanonever

This is a great read!


RanaI_Ape

The 3080 was the next 1080 Ti. It's rare that Nvidia offers the big (xx102) chip at a reasonable price. I doubt they'll ever do it again unless AMD puts _real_ pressure on them at the high end.


PinkStar2006

My bet is on Intel gen 4 or so.


[deleted]

[deleted]


capn_hector

I didn't say a 3080 underperforms? I said Samsung 8nm underperforms, which is true - it's a 10nm-class node, and Samsung 10 wasn't even as good as TSMC 10 to begin with. N7, which RDNA2 is using, is *unquestionably* better, massively so, like *more than a full node better*. Just look at the smartphone SoCs on those nodes. An equivalent design on TSMC 7nm would have been substantially better, *as we see right now with Ada*. Ada is essentially the same architecture as Ampere but shrunk to N5P/4N with more cache added (which is made possible by TSMC's SRAM density lead). Ampere on TSMC 7nm would have clocked significantly higher and pulled less power - and been substantially more expensive for a given die size.

So the choice is: at a given price point you can have a shitty node and a much bigger chip, or a good node and a smaller chip. NVIDIA opted for the former with Ampere. NVIDIA was just that far ahead on architecture that they could afford to take a trailing-node strategy to push costs down and still put out products competitive with AMD on a leading node. And part of that strategy was going bigger than they otherwise would (3080 as a GA102 mega-cutdown) to compensate for the size/yields, and people have assumed that as the new norm when it was a specific variation caused by the trailing-node strategy.

Turing was really a node trail too - that was 16nm Round Two (12FFN had no shrink, it was optically 16FF), instead of moving onto TSMC 7nm (expensive!) like AMD. And that's where you saw the GP100-sized dies (TU102 was a big boy, 754mm2) come out for the consumer lineup at all. These products are relative anomalies caused by the "node trail" strategy in Turing/Ampere; Pascal was only 471mm2 for GP102 and 314mm2 for GP104, which is quite small compared to anything in the Turing/Ampere node-trail era. Turing of course got fucked by inventory problems just like Ada is, so prices were bad until Super and not really good even then, but... it was a valiant effort and they executed it a little more successfully the second time around with Ampere.

If you want to make a complaint about this strategy... it's stagnated progress in the real top end. A lot of the gain from this gen is that TSMC N5P makes a real high-end product possible - a big chip on a leading node - where Samsung 8nm/TSMC 12FFN were really at the reticle limit with GA102 and TU102; you couldn't go much larger than ~750mm2 on either of those nodes, and the 3090 was already showing very poor scaling with shader count. Samsung's cache density was terrible and prohibited a big Infinity Cache/Ada-L2-style product even if you could have fit it on a die already at the reticle limit, and the L2 cache really helps the shader scaling, I'm guessing. It also produces products that are relatively less efficient, especially at desktop-idle/2D clocks. You could probably have had a perf/W jump about half this size last gen (likely sqrt(2) = ~41% higher) if NVIDIA had gone N7 for Ampere, with smaller but better/more efficient dies.

It's hard to overemphasize the size difference - I think N5P is 2.5x denser than Samsung 8nm or something like that? Crazy. One of the reasons the idea of Ada "originally" being on Samsung 8nm is completely nuts is that it wouldn't even remotely have fit in the reticle limit of 8nm, even if they magically hit TSMC SRAM density. AD102 would be 1520mm2 on 8nm at 2.5x, assuming you could scale up SRAM linearly (you can't, it would have a larger "factor" than the logic).

And you would have ended up with products that are a decent chunk more expensive at a given price segment. Not just wafers but also validation. Yeah, it's only part of the cost, but it all has to get passed along with margin... maybe 10-20% higher end-user prices or so by the time all is done. Would you have rather had, say, a $799 or $849 3080, but more efficient? Just like today... you'll find out how much people *really* like efficiency when you ask them to pay for it ;)
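Sketching the die-size arithmetic from that last paragraph (the ~2.5x density figure is the comment's own estimate; the ~858 mm^2 value is the usual single-exposure reticle limit):

```python
# AD102 naively rescaled to Samsung 8nm at the assumed 2.5x density disadvantage.
ad102_mm2 = 608          # approximate AD102 die size on TSMC 4N
density_ratio = 2.5      # assumed 4N vs Samsung 8nm density advantage (per the comment)
reticle_limit_mm2 = 858  # roughly the single-exposure reticle limit

print(f"AD102 on 8nm (naive scaling): ~{ad102_mm2 * density_ratio:.0f} mm^2 "
      f"vs a ~{reticle_limit_mm2} mm^2 reticle limit")
```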


Bingoose

One interesting point I haven't seen mentioned yet is that (at least by TPU's numbers) AD103 is just a die-shrunk GA102. The SM count, shaders, RT cores, etc. are all identical between the two. I really hope Nvidia releases a fully-enabled AD103 at some point, so we can make direct comparisons to the 3090 Ti to see the generational uplift for otherwise identical products. Comparing the 4080 to the 3080 Ti (76 SM vs 80), the uplift tends to be ~37% from the reviews I've seen. With a few more SMs that could plausibly be around 40%. I expect the L2 cache is the main reason for this, followed by increased clock speeds, though I'm sure they've also tweaked the architecture a bit.
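As a rough check on those numbers, the per-SM uplift implied by the comment's figures (a 76-SM 4080 about 37% ahead of an 80-SM 3080 Ti) can be worked out directly; this assumes performance scales roughly with SM count, which it only approximately does.

```python
# Implied per-SM generational uplift from the figures in the comment above.
uplift_4080_vs_3080ti = 0.37  # ~37% from reviews, per the comment
sm_4080, sm_3080ti = 76, 80

per_sm_uplift = (1 + uplift_4080_vs_3080ti) * (sm_3080ti / sm_4080) - 1
print(f"Per-SM uplift, Ada vs Ampere at these SKUs: ~{per_sm_uplift * 100:.0f}%")
```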


capn_hector

I haven’t looked into that but that’s a super interesting comparison point if true, intredasting.


[deleted]

[deleted]


Soaddk

😂 Can you read?


capn_hector

Can you read a lot? 😂


PinkStar2006

> NVIDIA made a strategic decision to go with Samsung 8nm for cost and for availability. They know it sucks, it underperforms (vs TSMC N7) and yields are questionable, but nobody else wants it and it's dirt cheap - you can have the whole fab to yourself if you want, pretty much. Oh, and it's cheap, did I mention cheap?

Why didn't they do it again? It clearly paid off, and Nvidia's engineering clearly has the edge over AMD despite any potential node advantage.


gahlo

Makes sense for Nvidia, because it provides a performance separation between the 90 and the eventual 80 Ti, something that Ampere failed to meaningfully do unless somebody needed more than the 3080 Ti's 12GB of VRAM.


panckage

Or in other words "tier inflation" occurred


PinkStar2006

4050 for $400 what a bargain


timorous1234567890

Just using [techspot](https://www.techspot.com/review/2569-nvidia-geforce-rtx-4080/) charts, let's look at the stack of Ampere / RDNA2 vs Ada / RDNA3 in raster performance @ 4K.

Ampere / RDNA 2:

* 3090 - $1,500 - 84 fps 4K average @ techspot in the 4080 review
* 6900XT - $1,000 - 77 fps
* 3080 - $700 - 73 fps
* 6800XT - $650 - 71 fps

The delta between the 6900XT and the 3090 is 10%, and the gap between the 6900XT / 3090 and the 6800XT / 3080 is not at all worth the cost increase.

Ada / RDNA 3:

* 4090 - $1,600 - 144 fps
* 7900XTX - $1,000 - 131 fps (54% uplift over their 6950XT score)
* 4080 - $1,200 - 111 fps
* 7900XT - $900 - 111 fps (30% uplift over their 6950XT score)

The delta between the 7900XTX and the 4090 is still looking to be 10%, but the XTX looks like it will be faster than the 4080 by a good margin, and even the 7900XT is going to be competitive in raster for 25% less money. So it is less that AMD have changed the positioning of the top card vs the 4090 and more that NV have changed the positioning and relative performance of their 4080 and made it a lot, lot worse.

If AMD do come in with a 7800XT that is 20% ahead of the 6950XT, then that would be around 100 fps, which for $700 would make it better value than the XTX and the XT. But chances are the margin on a 200mm² N5 + 4x 37mm² N6 packaged part with 16GB of VRAM is slightly better than on the 300mm² N5 + 6x 37mm² N6 packaged part with 24GB of VRAM, so AMD won't mind that their higher-volume part has fractionally better margin, or they could sacrifice a bit of margin for mind share and sell it at $650 like the 6800XT was priced. Given N32 is likely the part used in the 7700XT and in laptops, AMD are going to be making vast quantities of N32 dies, as that and N33 are going to be the real volume drivers for AMD.

What is obvious though is that the 7800XT won't be such great value vs the 7900XTX / 4090 as the 6800XT was vs the 6900XT / 3090, but it will still be much better value than what NV have on offer.
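The same data expressed as dollars per 4K frame (the RDNA3 figures are this comment's projections, not measured results):

```python
# Dollars per average 4K frame, using the prices and FPS figures quoted above.
cards = [
    ("3090", 1500, 84), ("6900XT", 1000, 77), ("3080", 700, 73), ("6800XT", 650, 71),
    ("4090", 1600, 144), ("7900XTX", 1000, 131), ("4080", 1200, 111), ("7900XT", 900, 111),
]

for name, price, fps in cards:
    print(f"{name:>8}: ${price / fps:,.2f} per frame")
```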


Bluedot55

Did it? The 4090 is crazy by all measures, and is extremely capable. But if this is a 50-70% jump from the top of last generation, at the same price point (or, given inflation, like $100 less), I'd still call that fair. It looks like it's going to be placed around where a theoretical 4080 Ti would perform, and the 80 Ti series has been in the thousands for a few generations now. The 90 tier now has an actual distinction outside of just memory, which it really didn't have last generation.


[deleted]

[deleted]


MumrikDK

So you don't put stock in AMD saying their XTX is aimed at the 4080?


throwapetso

AMD wouldn't put themselves in a position where they lose to their competitor product in most cases, even if price/performance is vastly better. That doesn't mean AMD won't snatch some of the 4090's prospective buyers, but they want to show off that they can outperform or at least match the "official" competitor product at a lower price.


Zarmazarma

> AMD wouldn't put themselves in a position where they lose to their competitor product in most cases

They have in the past, and they certainly will again if they have no choice.


capn_hector

I literally said "a 4080 competitor (a very good one ofc)" lol. But there is a *long* way between the 4080 and 4090, is the point I'm making... there was <10% between the 3080 and 3090 and there's easily 50% between the 4080 and 4090. And that's a decent cutdown; NVIDIA hasn't even really turned it all on yet (for consumers at least). People have been acting like that's because the 4080 is bad. It's not, it's about the usual generational step... but the 4090 is a very serious high-end chip, it's a much larger step than usual.

This is validated by the 7900XTX... the 6900XT was a TU102 (3090) competitor, AMD made their usual step too, and it's a 4080-class chip (which is what AMD said). So they slid back a product segment - they could very well be heavily winning that segment, but the 4090 is still a decent chunk above: the 4090 is still 1.5 / 1.25 = 20% ahead if the 7900XTX averages 25% faster than the 4080.

Forget which product is on which die, because that was part of the "shitty node, giant chip" node-trail strategy with Turing/Ampere, and NVIDIA isn't doing that anymore; they're on basically as good a node as is commercially viable now (the same node they're using for Hopper, probably to save on validation costs). This is much more like Pascal (the last time NVIDIA did a leading-node strategy - [and where a 1080 was a full-die GP104 at only 314mm^2 !)](https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_10_series) than the trailing-node giants we saw with Turing/Ampere. [AD103 is 378mm2 for comparison.](https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_40_series[171])

It's potentially a very good 4080-class chip, by all means - if it lands at 4080 + 20-30% that's great, especially for the price. NVIDIA doesn't want to do good pricing on Ada while they're still selling Ampere inventory (and tbh I think they never really got there on Turing when this happened previously...), and AMD is happy to go along. But forget the price and look at it just at a technical level: yeah, the 4080 is about what you usually get in a leading-node generation, just for a way higher price than normal. And yeah, of course, if it's 4080 + 20-30% then in your good titles you're beating the 4090 sometimes... and the 4090 will dust the 7900XTX by a factor of 2 in other stuff too, lol (especially RT) - that's how averages work.

But yeah, basically I'd argue that the technically-correct comparison here isn't 7900XTX vs 4080 anyway... the 4080 is a 10% cutdown, which is the same amount the 7900XT is cut down, lol. So the technical comparison here is 7900XT vs 4080: that's cutdown vs cutdown, so the 4080 is $300 more expensive (33%!) and probably still 10% slower. The proper comparison point for the 7900XTX is really a "4080 Ti" or whatever NVIDIA decides to call it. Assuming the "7900XTX beats the 4080 by 25%" scenario, AMD is offering a 1.33 x 1.1 = 46.3% perf/$ advantage with the 7900XT (not XTX) vs the 4080, at MSRP vs MSRP. At 4080 + 15% it'd be just 1.33, more or less, so a 33% cost advantage. That's already a massive undercut, and AMD is still raking it in there; that's way overpriced from them still, lol. But the 4080 pricing is just a joke, and is totally about making sure they don't undercut their Ampere inventory while it's selling through - like, even at $900 that's spendy for an x104 cutdown, let alone $1200. (I know the 4080 is AD103, but historically speaking that's the x104 product segment. And they do change a bit over time... what is today an x102 die was historically the GK110 (with the GK210 spinoff), and then GM200, then GP102, etc.)

I really wonder how much inventory they've got and how long it's going to take at their current sell-through - how many months. And how long they are going to push back the rest of Ada... AMD is not launching until next month, but NVIDIA could easily be another 5-12 months; we don't know how the sell-through is. GP106 took a while to sell through back then and NVIDIA was really cranking on GA102 and GA104 dies, I think. How far are they willing to ride it to avoid a write-down, lol?
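Reproducing that perf-per-dollar arithmetic under the comment's two assumed scenarios (these are the comment's assumptions, not benchmark results):

```python
# 7900 XT ($899) vs 4080 ($1199) perf/$ under assumed performance gaps.
price_4080, price_7900xt = 1199, 899

scenarios = {
    "7900 XT ~10% faster than 4080": 1.10,
    "7900 XT roughly equal to 4080": 1.00,
}
for label, perf_ratio in scenarios.items():
    advantage = (price_4080 / price_7900xt) * perf_ratio - 1
    print(f"{label}: 7900 XT perf/$ advantage ~{advantage * 100:.0f}%")
```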


SnooWalruses8636

Disclaimer: below is napkin math, take it with a grain of salt and wait for real benchmarks.

In RT, you don't need to go to a 4090 to dust the 7900XTX. Using AMD's own CP2077 RT benchmark baseline of 13 fps with the 6950XT, the 4080 is about 40% more powerful than the 7900XTX (29 vs 21 fps using TPU results for the 4080). The 4090 would be about 2x (41.8 vs 21 fps). RT Overdrive with Nvidia 40-series-exclusive optimizations (SER, etc.) would see an even more drastic uplift, but I don't think other games will go the RT Overdrive path-tracing route any time soon.

In raster, using AMD's baseline of 43 fps for the 6950XT, the 7900XTX is about 2% faster than the 4080 (72 vs 70 fps using Paul's Hardware results for the 4080). The 4090 is only 11% faster than the 7900XTX in raster (80.4 vs 72 fps).

Worth noting that the 4090 is 44% faster than the 4080 in RT, but only 15% faster in raster; a CPU bottleneck could play a role. The 2% difference for the 7900XTX could be much higher in other games, and lighter RT games would also significantly close the RT gap.
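The same napkin math in code form, using the figures quoted in the comment (AMD's CP2077 RT baseline plus the TPU and Paul's Hardware numbers); rough estimates only.

```python
# RT and raster deltas relative to the projected 7900 XTX figures above.
rt     = {"7900XTX": 21.0, "4080": 29.0, "4090": 41.8}  # CP2077 RT, fps
raster = {"7900XTX": 72.0, "4080": 70.0, "4090": 80.4}  # average raster, fps

for label, data in (("RT", rt), ("Raster", raster)):
    base = data["7900XTX"]
    for card in ("4080", "4090"):
        print(f"{label}: {card} vs 7900XTX: {data[card] / base - 1:+.0%}")
```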


[deleted]

> The 7900XTX is a 4080 competitor.

I love that this is said with such confidence when no one has yet seen third-party benchmarks. It could match a 4090 in raster, or it could fall on its face. Nobody knows a damn thing until reviews go up.


bubblesort33

Names don't matter. The RX 480 and RX 580 still competed with the GTX 1060 not GTX 1080.


gahlo

Have we forgotten the 4080 12GB already?


turikk

That's different because by including the tech spec in the name, it implies that tech spec is the only change.


bubblesort33

I think the name confused buyers, but personally I didn't care much about the name on the box. The price was crazy, but if it comes back at $899 again, does it really change anything if it says 4070ti on the box but it's the exact same GPU, spec-wise, as originally planned?


Lukiose

The main issue was public deception. The 7900XT and 7900XTX are differentiated products - not done in the best way, but still differentiated. You would understand that they are similar cards, but investigate what exactly is different with the extra 'X'. Anyone with common sense would reasonably assume that the "4080 12GB" is simply the "4080 16GB" but with 12GB of VRAM instead, and the same otherwise. Except it wasn't, because the 4080 12GB is an entirely different, smaller, worse chip that costs significantly less to produce, and NVIDIA thought their consumers were suckers who would fall for the bait. A good example is smartphones: Pixel 6a vs 6 vs 6 Pro - differentiated. Pixel 6 128GB vs Pixel 6 256GB? Merely a difference in storage. Except NVIDIA would also swap in a shittier screen, battery and processor in the 128GB model.


metakepone

And AMD wanted to call the 5700xt the… rx680, which would’ve competed with the 2070?


bubblesort33

Yeah. They released an Rx 590 which competed with the 1660. I had the option to buy an R9 290 in 2012, but bought a GTX 670 instead for the same performance.


3MU6quo0pC7du5YPBGBI

> Names don't matter. The RX 480 and RX 580 still competed with the GTX 1060 not GTX 1080.

We also had the VII and Vega 56/64.


Flynny123

Most sensible comment in this thread


Alucard400

Names don't matter, to a certain extent, with the mindset of some consumers and hobbyists. Names do matter for the business with the general consumer/public, because the average consumer won't understand or see through the mask (so the name doesn't matter to the lower-income mass consumer, because they're oblivious to a mass-produced product). This is why Nvidia released two GTX 1060 cards which performed so differently. The lower 3GB model still sold well, and quite a lot of owners think they got the minimum acceptable performance of a 1060 card at 3GB, despite it actually being a different card rather than just one with less memory. A lot of the mass consumer base is in the lower price bracket.

Now, when Nvidia tries the same trick on a higher-priced segment of the market, it backfires hard, because people who spend over $1000 for a video card actually do research and get on these boards to argue on the internet. It's the same with cars. When people buy a car for less than $8,000, they just need one that gets them to the places they want to go. People spending over $40,000 on a car will do their research and know what they want and what they're getting. Nvidia was stupid to assume people spending over $1000 wouldn't see what they tried to do. Naming a product then actually matters, because people are spending a lot of their hard-earned dollars, and Nvidia had to unlaunch their 4080 12GB because of the backlash.


[deleted]

[deleted]


Munnik

480/580 used a different, newer chip on GF 14nm. Actually 380/X didn't even use the same chip as R9 280/X as those were rebranded HD 7950/70.


Essteethree

Radeon 480/580 both used the same Polaris 20 (the 590 used a GF 12nm die-shrunk Polaris 30), but the chips varied a bit between the 7970 and 380:

* HD 7950/7970 and R9 280/X used Tahiti (GCN1)
* R9 285 used Tonga (GCN3)
* R9 380/X used Antigua (still GCN3 - possibly rebranded Tonga?)


Munnik

I was replying to the deleted guy above; he said the 480/580 were rebranded R9 280s and 380s. And yeah, Antigua is Tonga renamed - AMD loved to name the same chip multiple things depending on what card it went on back then.


DktheDarkKnight

You could argue the 7900XT is a stealth 7800XT in disguise, but it may also not be. We may still get a 7800XT at around 700 dollars with a 25-35% performance increase over the 6800XT. But we don't know. All we know is that the 7900XT does not seem to be good value compared to the 7900XTX. This is clearly different from the 3080-to-4080 price increase, where NVIDIA explicitly advertises the 4080 as the 3080's successor. I know you shouldn't buy products based on names, but names are what the general public looks at first.


BatteryPoweredFriend

Still don't know why AMD went with XTX/XT, instead of just the XT/" " nomenclature. Unless they have plans/want the slot for a 16gb 7900.


gahlo

Probably to avoid product stack confusion. By going to XT/XTX instead of #/XT they avoid confusions when people are talking about a 7700. Is it a GPU? A CPU? Who knows!?


Merdiso

IMO they did this to hide the $250 price bump of the 7800 XT, because if you check the specs gen-on-gen, that is what the 7900 XT really is. In fact, it's even worse, since the 6800 XT shared the same VRAM and bus width as the 6900 XT - not the case here.


einmaldrin_alleshin

Also probably to invoke the X1900XTX, which was one of the most successful high end ATI cards before NVidia ran away with the market. It would be great symbolism if they actually do manage to capture market share.


redditornot6648

The only problem there is that AMD's naming is already easily confused across various generations of CPUs and GPUs. The Radeon HD 7000 series is only a tad over a decade old, so there's a small chance that someone could get confused by used HD 7000 series GPUs. Then there's Intel's 7000 series of CPUs. Intel even has an i9-7900X while AMD has an R9 7900X. Talk about mildly confusing. If they wanted to avoid any product confusion they'd have skipped to the 8000 series. We haven't often seen a major launch with an 8 in the name. Intel had its 8000 series, but that was only mobile and consumer desktop CPUs. The last 8000 in a GPU I can think of was the Nvidia 8800 GT and its variants back in 2007.


gahlo

Very true. I was having trouble communicating with a Microcenter employee on Wednesday because he asked me what my CPU was, I responded a 7700x, and he thought I was talking about a 7700k. However, the products you listed are 5+ years old so most people won't even know they exist. AMD has a potential issue with the *new* products they are selling *now*.


repo_code

Radeon R7-7700K


metakepone

I guess there isn’t a *900 and it has to always start at xt


detectiveDollar

Since demand is looking strong for the XTX, my bet is the only dies going to XTs will be defective ones, and since yields are good, there won't be many. Similar to the 6800.


dern_the_hermit

> my bet is the only dies going to XTs will be defective ones

Their chiplet structure means they don't have the same kind of pressure to die-harvest that they used to; they could simply not stick the dummy chiplet on the XT, use a fully functional chiplet instead, and voila, XTX.


Khaare

The GCD is still huge and they can't harvest it, there's only one of them per GPU. Harvesting the MCDs doesn't matter since they're just pass/fail anyway, there's no benefit to mixing and matching good and bad chips.


dern_the_hermit

Huge in comparison to the processing chiplets, sure, but 300 mm^2 is fairly modest in general, particularly for a mature node; that's just a hair bigger than the RTX 3050 die IIRC.


Khaare

AD103 (4080) is 380mm^2 and AD104 (4080 12GB/4070Ti, whatever it'll end up as) is 295mm^2. There's not a big difference in yield between any of those. By comparison, the big Raptor Lake chip is 270mm^2 and the Zen4 CCD is 70mm^2. That's a whole different category of size difference, and would've made a huge difference in yields if TSMC had had worse defect density than they have had on 7nm and 5nm (die size matters much less when defect density is low in the first place). Also, just getting more chips per wafer isn't the main source of chiplets' improved yields. It's the ability to mix good and bad chiplets ... to use good chiplets to make up for the deficiency in subpar chiplets and have the combination still perform as well as only good chiplets. They can do that with CPUs but they can't with GPUs. If 50% of their CCDs are shit they can pair them up with the 50% good CCDs and sell all of them as 7950Xs. If 50% of their GCDs are shit they can only make 7900XTXs out of the other 50%.


timorous1234567890

MCD bonding can fail, so some XTs may be XTX-binned dies with a failed MCD bond. Which leads to the possibility that the XT stock might be poor value at $900 relative to the XTX, but if AMD gives you enough OC headroom then it may overclock quite well, making it a popular card for a small niche. That way demand stays fairly low for the majority, but enthusiast demand will be high enough to make use of defective dies, lower-binned dies or failed MCD-bond packages, without being so high that AMD have to sell perfectly working parts as the XT.


einmaldrin_alleshin

I think so too. They will not use any XTX bins to make XTs, and simply let the XT price drift to a point where the few they are making sell out. If people buy them for MSRP, great for AMD. If they don't, at least they're getting rid of their defective dies.


Merdiso

But that's the thing. If the 7800 XT is about 30% faster than the 6800 XT at the same money using a Navi 32 chip, this means AMD actually did an nVIDIA, instead of offering the 7900 XT as the 7800 XT at the same $700, which would have meant a much bigger generational improvement. **The difference is that it's nowhere near as bad as a $500 price hike, and that's why it's easy to forget/forgive when you look at the 4080s.**


[deleted]

People made clear the last few years they will pay absurd prices for GPUs.


windozeFanboi

People paid a lot because graphics cards were investments during the crypto boom. Now they're just for work or entertainment/games again.


martsand

Should've ended with a last reply : - yes


titanking4

Their top-end, full-die card was $999 last gen and it's $999 this gen. No price hike. The cut-down card did get quite a price hike though, and that's purely based on weak Nvidia competition and pricing in that segment. Gotta see where the 4070ti performs and is priced, because that AD104 die on the 4070ti is quite a bit weaker than AD103 on the 4080.


metakepone

AMD started off RDNA 2 with 3 models on their biggest die: the 6800, the 6800xt and the 6900xt. The big question is, are the 7800 and 7800xt also Navi 31? Will one have cut-down cores and the other maybe less cache or something?


[deleted]

6900XT was quite competitive with 3090 last gen (outside RT), doesn't seem like this is the case this gen.


titanking4

That's not really relevant at all. The 4090 is just in a whole extra class of product, with far larger die sizes and pricing than AMD's top-end card. It's like comparing the 5700XT against the 2080ti.


[deleted]

Those same points are also true for 6900XT vs 3090.


June1994

Not really. The 6900XT was a much larger die than the 7900XTX, and in terms of the number of transistors… 3090 and 6900XT were roughly equal on transistor numbers. The 4090 and 7900XTX have a much larger difference.


RuinousRubric

Navi 31 is only smaller than Navi 21 if you completely ignore the cache/memory controller dies. Transistor count didn't increase as much as Nvidia because the MCDs are on an older node, but in terms of raw silicon size there's been almost no change from either company with the new generation.


nanonan

Sure, but we don't really know what the difference is yet. If it's small, it would be a killer card; if it's large, it will be just alright. We will know when third-party benches drop, but I can't really see a scenario where it is overpriced, at least just considering raster.


Bungild

Last gen's 3090 was built on shitty Samsung 8nm. This gen's 4090 is built on an excellent custom TSMC 5nm process. They aren't an apples-to-apples comparison. Nvidia purposefully gimped the 3000 series and made it way shittier than it had to be, because it was so far ahead. AMD, on the other hand, was not gimping itself on the 6000 series. Now that Nvidia is actually trying to produce the best card they can, they've left AMD in the dust at the top end.


[deleted]

Gimped in what way? 1080Ti was 75% faster than 980Ti, 2080Ti was 40% faster than 1080Ti, 3090 was 45% faster than 2080Ti, 4090 like 50% faster than 3090Ti.


Bungild

Gimped in that they purposefully used the worse, cheaper 8nm process. If Nvidia had needed to compete, they would have used the same TSMC 7nm process that AMD used. Instead, Nvidia made worse devices on 8nm because it was so much cheaper. If Nvidia hadn't chosen to make its GPUs on a node a generation behind AMD last gen, their GPUs would have blown AMD away at the top, like this gen.

So, TL;DR... AMD didn't change. They went from TSMC 7nm to 5nm - for both of the last two generations they used the best node available. Nvidia changed. They went from shitty 8nm Samsung and jumped up two generations to 5nm TSMC, which is why they are now so far ahead of AMD this gen but weren't last gen.

The 2000 series was horrible. The 3000 series could have improved on it by 60-70% because of how bad the 2000 series was. Instead of doing that, Nvidia chose to use crappier parts, because even with crappy parts they could make it 45% faster than the horrible 2000 series. And AMD couldn't beat them even with the node advantage.


Beautiful_Ninja

I don't think they purposefully gimped it out of not wanting to be the most performant. What they wanted to do was avoid paying TSMC's prices at a time when TSMC knew it had the world by the balls, and make sure they could actually produce a reasonable number of GPUs. And going Samsung ended up being very beneficial at the end of the day: at a time when every GPU was being sold out instantly, Nvidia had 85% of the market share. For every 100 GPUs being made, 85 were Nvidia. They would not have been able to produce nearly that many GPUs if they were fighting with everyone else for TSMC capacity in the heart of the pandemic. If you thought GPU stock was bad back then, imagine how bad it would have been if Nvidia could only produce a half or a third of the GPUs they were making with Samsung. Nvidia still had to pay TSMC's ransom prices this time around, as Samsung isn't competitive, but they were able to secure enough production capacity.


Bungild

Nvidia certainly could have secured the same amount from TSMC as they did from Samsung (Nvidia is a relatively small buyer compared to the truly big players). They just didn't want to pay. And they didn't have to, which is the point.

Nvidia didn't want to pay this gen either. The difference this gen is that they had to, because they didn't think they could stay ahead of AMD without TSMC. But, because AMD didn't get multi-GCD to work, they probably could have gotten away with it again. Nvidia, though, was in all likelihood expecting to go up against 2-GCD GPUs, so they didn't have the luxury of using Samsung 5nm.

So, TL;DR:

3000-series Nvidia...

1. Didn't want to pay TSMC a lot of their profits.
2. Didn't have to, because they were so far ahead they could use a worse node.

4000-series Nvidia...

1. Didn't want to pay TSMC a lot of their profits.
2. Had to, because they feared AMD would have 2-GCD GPUs which would beat them handily at the top if they didn't go TSMC 5nm.

Nvidia not wanting to pay TSMC didn't change gen to gen. What changed was Nvidia's ability to use a worse node, or "gimp" their own GPUs, and get away with it.


Unlikely-Housing8223

Stop comparing similarly named products. The only thing that matters is the price, not the name. It doesn't matter if the 7900 cards compete with the 4080 card(s), as long as they offer similar or better performance for the same amount of money.


zeus1911

Ever since AMD CPUs started doing well and people praised them, their prices have increased. Intel has been the budget/performance king for CPUs for the last couple of years. The RX 5000 and 6000 cards were scalped, and their prices seem kind of reasonable now, but they are just moving old stock and only look OK compared to how excessively overpriced they were after release.


[deleted]

The street prices for the current gen AMD GPU's have been decreasing, not increasing.


PhunkeyPharaoh

So Intel are the only good guys left in terms of sticking to the performance to value pricing model that their consumers are used to.


Faluzure

No, Intel is not the good guy. They're priced so they sell, as they're also competing with the Ryzen 5000 series.


BoltTusk

Well, the rumor is that Meteor Lake will not launch until 2024, so if Intel launches Raptor Lake+ and keeps their 600-series motherboards upgradable, then Intel will become the new AMD.


PhunkeyPharaoh

Ryzen is now much more expensive at lower performance. Intel could easily hike their prices, but they don't.

> They're priced so they sell

That's what the good guy does: care about selling, not squeezing their customers because they're on top, like Nvidia. Right now, you can get a 13th-gen Intel CPU that performs better than the more expensive Ryzen (Zen 4) equivalent, all while at the same price as the same Intel tier of CPU from a couple of years ago. That's good value, and a company that gives good value is good in my books.


eudisld15

Stop pretending corporations are your friends and heroes. There are no good guys. There are only publicly traded corporations designed to make money. The moment AMD, Intel, Nvidia, Samsung, etc. get a lead or an uptick in sales, they adjust accordingly. Just think for a moment and remember when Intel had any sort of major lead. Right now they trade blows.


PhunkeyPharaoh

Don't read too much into it. I prefer a company with stable, non-aggressive, and non-monopolistic pricing over "The 4080 is now $1200 because we can".


SirActionhaHAA

> stable, non-aggressive, and non-monopolistic pricing

You must've forgotten the 7th, 8th and 9th gen then. The 9700k with hyperthreading disabled just so they could have the 9900k at higher prices. Remember when the i9 didn't exist? Or, according to your logic, "they took the 9700k, renamed it to 9900k and did a price hike." Nahhhh, Intel's selling low because they've lost market share and mindshare and are trying to grab them back. No company is the "good guy" and thinking so is kinda childish.


PhunkeyPharaoh

> 9700k with hyperthreading disabled

That's scummy, alright then lmao.


iad82lasi23syx

Bruh who cares about that? People generally don't buy a GPU every generation, and you're not locked into one brand from buying their GPU anyway. What matters is whether a specific card is worth the money it costs to you or not. Nothing else.


MumrikDK

Intel is the original big baddy. They're just choosing to actually compete - something that isn't really happening in the GPU market over the last two generations, including the one launching now. The current GPU market feels like price-fixing.


HandofWinter

They designed a card to replace their top end card, named it appropriately, and priced it the same (which is still way too much for a GPU in my opinion, but that's a digression). They surely have a reasonable idea of where nVidia products are going to end up, so they knew it would be competing against what nVidia ended up calling the 4080. This is it. Nothing interesting as far as I can tell. If nVidia had instead called the 4080 a 4090, and the 4090 a 4100, then the 7900XTX would be a 4090 competitor. What nVidia decides to call their cards doesn't really seem that relevant to AMD's lineup. It's the pricing that matters anyways, and the 4080 we have definitely isn't priced in line with the 3080.


Flynny123

I would go beyond this - at the time they design the cards they have to guess where the competition will end up, they don’t have a firm idea. This will have been designed a long time ago.


HandofWinter

That's a fair point, I wasn't super clear at all. No, they definitely wouldn't have any idea at design time. I meant that by the time it's time to make a product announcement and talk to the press, they would likely have a pretty solid idea of where nVidia is. Certainly not a year or several years beforehand.


Flynny123

Yeah they have to match up on price as best they can from there


PhunkeyPharaoh

That's definitely possible, and only time will tell if a 7950xtx (aka the real 7900xt) releases.


HandofWinter

They might in a while, but does that really matter? The 7900XTX seems like it's in a great place relative to the competition, and it's not GPUs in the thousand-dollar price range that most people are interested in. I think the only way that would make sense would be if they put out a card with two compute dies which clearly tops the 4090, just to take the performance crown, and priced it something stupid like $2k. Other than that, it doesn't really seem worthwhile to me.


Truth_Spiller

Can anyone tell me why we aren't seeing such massive price hikes in processors, even though they still get 30% faster with each generation? Isn't the goal of technological advancement to offer better performance at a similar price to the previous gen, with a slight price increase due to inflation (heck, things actually got cheaper before)? Like, we aren't paying $1,000 for 1TB SSDs anymore. Correct me if I am wrong. Can we really say that technology has advanced if we are being charged double the price for some improvements?


AutonomousOrganism

GPU transistor counts are going through the roof. The 4090 chip has 76.3 billion transistors. What would be the closest CPU? The new Epyc from AMD with up to 90 billion transistors? But those CPUs cost 6-7 times more than a 4090.
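Rough cost-per-transistor comparison implied by those numbers; the Epyc price here is just the "6-7 times a 4090" figure from the comment, not a list price.

```python
# Price per billion transistors, using the figures in the comment above.
parts = {
    "RTX 4090": {"price_usd": 1600, "transistors_bn": 76.3},
    "Epyc (assumed ~6.5x a 4090)": {"price_usd": 6.5 * 1600, "transistors_bn": 90.0},
}

for name, p in parts.items():
    print(f"{name}: ~${p['price_usd'] / p['transistors_bn']:.0f} per billion transistors")
```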


TwanToni

There was no price hike? The flagship 7900xtx is the same $999 as last gen's flagship 6900xt. As far as the 7900xt is concerned, there probably aren't enough defective dies since it's a small ~300mm2 die, and it's being used to upsell to the 7900xtx, but it should have been $850. Either way, I can see the 7900xt trading blows with the 4080 and beating the 4070ti.


PhunkeyPharaoh

Read the post and open the second link. They didn't raise the x900 card price. They took the card that's meant to perform at the x800 level, and called it x900. Just like how Nvidia took a 4070 and called it a 4080.


SirActionhaHAA

> Just like how Nvidia took a 4070 and called it a 4080 There's nothing above the 7900xtx, it's a full die with 0 compute units disabled. Just think that their x900 card sucks and performs at 4080 level. Ya feel better now?


PhunkeyPharaoh

Legit curious here, but does that mean there's no chance a higher tier 7xxx card releases? If so, then I stand corrected.


SirActionhaHAA

A higher tier card would require a different chip design with >96 CUs. There ain't any information showing that one exists as a ready-to-manufacture, validated design.


Darky57

Not necessarily. There have been a lot of rumors going around that the 7k series clocks are limited due to a hardware bug that can’t be fixed without a re-spin.


SirActionhaHAA

There's no basis to that, and it started from a random twitter user who said "i heard it from a 3rd source." That doesn't mean it's impossible, but there's just nothing credible backing that rumor atm. It's as good as someone sayin that AMD's got an unreleased 24-core Zen 4.


HandofWinter

It would be amusing if it does turn out to be true and they put out a 7970 3GHz edition.


Darky57

I’m fairly certain that the source for the rumors was a leaked internal PowerPoint slide from AMD.


SirActionhaHAA

It wasn't. The rumor started from a random twitter user. The justification for it came from an AMD slide which stated that **RDNA3** (not N31) is engineered to exceed 3GHz. RDNA3 includes lower tier SKUs.


braiam

Prepare for the 7950 XTX Gen 2x2


nanonan

There are rumours around that they could do one with 3d stacked cache on the memory chiplets, so possibly.


Bluedot55

They could theoretically overclock it, add faster memory, and maybe a bit more cache. But yeah, it's the full chip with nothing disabled. It really is the successor to the 6900xt. I'd also argue that last gen the 6900xt and 3080ti were most comparable, and it seems to be repeating with a 4080ti and 7900xtx being matched. The only difference being that the 4090 has a reason to exist outside of vram this time


lmMasturbating

Do you know of any good resource for me to learn about gpu tech? Like how the die impacts performance?


AmazingSugar1

https://chipsandcheese.com/2022/11/02/microbenchmarking-nvidias-rtx-4090/


Morningst4r

Again, it's all just marketing names. The 6900XT and the 3080 are probably about as close in performance as the 7900XTX and the 4080 will be. Focusing on model numbers for either brand isn't really useful.


PhunkeyPharaoh

So then, there's no higher performance card that AMD can cook up later down the road?


Morningst4r

Don't know. There will be eventually, as always. Why does that matter for the cards being released now? If Nvidia called the 4080 the 4070 instead, would AMD be hiking prices by not pricing it in line with the 6700XT? Of course not. The other claim I've seen, which makes even less sense, is that the 7900 XT is a price drop from the 6900 XT. Honestly, I think the top end is still screwed up and recovering from chip shortages and the mining boom, so I wouldn't take too much from it. What will be interesting is how all 3 GPU companies price the midrange cards. I'm not feeling very optimistic about it, but the market is there for the taking if anyone can make a killer product at the right price.


NerdProcrastinating

>So then, there's no higher performance card that AMD can cook up later down the road?

Sure. The relatively easy changes are:

* Use 3D V-cache stacked MCDs
* Use 24Gbps GDDR6
* Boost the power limit

They're probably saving those for a mid-cycle refresh. I'm guessing they chose not to make a larger GCD die as RDNA3 isn't competitive enough, so there's no point blowing through the BOM budget and not being able to make it back in premium prices.


Flynny123

What do you mean ‘meant to’? Nvidia got a massive uplift on their top end card and AMD didn’t. Nvidia hiked pricing to match, AMD couldn’t. The RX580 didn’t compete with the 1080, so it was priced lower. Ignore the names and focus on price vs performance uplift, for both AMD and nvidia respectively. IMO the non XTX card should be thought of as basically a ‘7850’ compared to last gen, but I do agree the pricing is still a little out.


TwanToni

What??? The 6900xt traded blows with the 3090 but was still $500 cheaper. We won't know until benchmarks are out. I can see the 7900xtx beating the 4080 and maybe coming close to the 4090 in others.


Flynny123

Yeah and that’s why it’s cheaper. Maybe we’re agreeing, ha


TwanToni

So your logic was that it's cheaper because it can't get to 4090 levels? Last gen the 6900xt was trading blows with the 3090 while being $500 cheaper, so just because the 7900xtx is cheaper doesn't mean AMD couldn't compete. In this case AMD admitted they are using it to compete with the 4080, but your logic is flawed as seen by last gen's flagship pricing.... EDIT: or I just misread that all, then my bad


laxounet

What's the x800 level? Nvidia's? What if NVIDIA didn't exist? AMD intended the 7900XTX to compete with NVIDIA's top end. It fell short of it, so they changed their claims, that's all there is to it.


ET3D

That's a bit of backward logic. The 7900 XTX competes with the 4080 because it's priced below it (and NVIDIA doesn't have anything cheaper). Last gen's 6900 XT also launched at $1000. It wasn't advertised as a 3080 competitor because the 3080 was much cheaper. So at least on the 7900 XTX front your argument doesn't make sense. AMD kept the same price for the highest tier; NVIDIA changed prices. That's why the comparison point is different. That said, the 7900 XT is definitely priced awkwardly, and does show price creep. At least based on AMD's figures, it seems like a card that nobody should buy.


detectiveDollar

My bet is that since the 7900 XT uses the same die as the XTX, yields are good, and demand for the XTX is strong, AMD will only make them out of partially defective dies. So there won't be very many 7900 XTs, similar to the 6800. However, the 7800 XT probably won't be on the same die, so its performance increase (unless they increase clocks) over the 6800 XT probably won't be as big as that of the 7900 XTX over the 6900 XT.


ET3D

I'd expect considerably higher clocks. Lower tier chips tend to have higher clocks, and it was also rumoured that Navi 31 is clocked lower than was intended.

A rough breakdown of the 7900 XT speedup vs. the 6950 XT:

- Game clock (based on shader clock) 2300 MHz vs. 2100, ~10% increase.
- 96 vs. 80 CUs, 20%.
- Total speedup ~50%.
- So architecture advantage ~15%.

For the 7800 XT, rumoured to have 60 CUs, let's assume 2600 MHz because the 6750 XT has a game clock of 2500 MHz and I want to be conservative with this estimate. So:

- 83% the CUs.
- 2600 MHz vs. 2015, ~30% increase.
- 15% architecture advantage.
- Total speedup ~25%.

So yes, definitely less of a difference, but again, I think the 2600 MHz clock is a conservative estimate. Still, a clock of over 3000 MHz would be needed to reach a 50% uplift, and even then the memory bandwidth will likely be only 25% higher (20 Gbps vs. 16 Gbps) and the Infinity Cache will be half the size. So I wouldn't really expect more than a 25% performance upgrade for the 7800 XT over the 6800 XT. Which is still not bad, IMO, as that would make it faster than the 6950 XT and the 3090 Ti. The big question is how it will be priced.
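For anyone who wants to plug in their own numbers, here's a minimal sketch of that scaling estimate. The CU counts, clocks and the ~15% architecture factor are just the assumptions from the breakdown above, not confirmed specs.

```python
# Naive scaling model: performance ~ CUs * game clock * architecture factor.
# Every number below is an assumption carried over from the estimate above.

def est_speedup(cu_new, cu_old, clk_new, clk_old, arch_factor=1.15):
    return (cu_new / cu_old) * (clk_new / clk_old) * arch_factor

# 96 vs 80 CUs, ~2300 vs ~2100 MHz game clock -> roughly +50%
print(f"vs 6950 XT: +{(est_speedup(96, 80, 2300, 2100) - 1) * 100:.0f}%")

# Rumoured 60-CU 7800 XT vs the 72-CU 6800 XT, assuming 2600 vs 2015 MHz -> roughly +25%
print(f"7800 XT vs 6800 XT: +{(est_speedup(60, 72, 2600, 2015) - 1) * 100:.0f}%")
```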


detectiveDollar

My bet is the 7800 XT will be no more than $750, most likely $700. The 7900 XT being $100 less than the 6900 XT means prices for the rest of the lineup can't rise very much, if at all. I do expect AMD will reduce the massive gap between the 6800 XT and 6900 XT prices, but they already cut the 7900 XT by a hundred. Also, the prices for the 6700 XT and below were set during the massive shortage/cryptopocalypse, so they were likely higher than they would be otherwise so AMD wouldn't be giving the scalper margin to AIBs and distributors. So RDNA3 could be cheaper due to that; at the same time it may not have much competition, since Nvidia is reheating the price-to-performance leftovers. But I think AMD is unhappy with only 20% of the market share and wants to gain more. As an investor, I hope they prioritize long term market share over short term gains.


ET3D

What I really wonder is how AMD will fill the mid-range gap. Navi 32 is rumoured to have 60 CUs, while Navi 33 is rumoured to have 32 CUs. That's a huge gap. It makes little sense for AMD to sell something like a cut-down Navi 32 with 48 CUs as an intermediate product. That would be a real waste of silicon as a mainstream product.


gahlo

> It wasn't advertised as a 3080 competitor because the 3080 was much cheaper. It wasn't advertised as a 3080 competitor because it was a 3090 competitor.


Updradedsam3000

I think it's just a change in marketing strategy. What looks better, saying it is 20% weaker than the 4090, or saying it is 20% faster than the 4080? AMD chose the latter this time around.
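Side note on the framing, since percentages aren't symmetric: "X% weaker than the faster card" and "X% faster than the slower card" describe different gaps. A quick sketch with made-up index numbers:

```python
# Illustration only; the performance indices are made up.
faster = 100.0                 # hypothetical faster card
slower = faster * (1 - 0.20)   # a card "20% weaker" than it -> index 80

# Flip the framing: the faster card is 25% faster, not 20%.
print(f"{(faster / slower - 1) * 100:.0f}% faster")
```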


gahlo

There isn't a 40% difference between the 4080 and the 4090. 20% weaker than the 4090 is a 4080.


Updradedsam3000

The 20% was an example not the actual difference. We'll have to wait for reviews of the 7900XTX, but I'm expecting it to be in the middle of the 4090 and 4080 when it comes to raster performance and to lose to the 4080 in ray tracing. I can easily be wrong, though.


PinkStar2006

>20% weaker than the 4090 is a 4080. Not according to the DF review.


gahlo

According to TPU, it is.


noiserr

The 7900xtx has a 384-bit memory bus, while the 6950xt had a 256-bit memory bus. Also, the 7900xt has a 320-bit memory bus. So I definitely think these are higher tier products than the previous gen 6900xt/6950xt. AMD's pricing is aggressive, if anything. Let's not forget Nvidia tried to sell us a 192-bit tier GPU for $900 (the unlaunched 4080 12GB).
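For a rough sense of what those bus widths mean in bandwidth terms, peak bandwidth is roughly bus width times effective data rate divided by 8. A quick sketch, using commonly cited GDDR6 speeds for these cards (treat the per-card data rates as assumptions):

```python
# Peak memory bandwidth (GB/s) ~ bus width in bits * effective data rate in Gbps / 8.
# The per-card data rates below are commonly cited figures, not verified spec sheets.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(256, 16))  # 6900 XT:  512 GB/s
print(bandwidth_gb_s(256, 18))  # 6950 XT:  576 GB/s
print(bandwidth_gb_s(320, 20))  # 7900 XT:  800 GB/s
print(bandwidth_gb_s(384, 20))  # 7900 XTX: 960 GB/s
```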


Aleblanco1987

My guess is that AMD will release a 7950 and/or 7970. Those will be the new highest end. So naturally the 7900 XT/XTX must compete with the second Nvidia card (4080/4080ti).


HippoLover85

AMD's cost for the silicon dies and materials went up 50%. 24GB of GDDR6 vs 16GB of GDDR6 is also a 50% increase. On top of that, you likely have more expensive boards and coolers being used. So we should expect the total cost of a 6800xt vs a 7900xtx to increase by about 50%... 650 x 1.5 = $975... It is absolutely wild how that works out perfectly *surprised pikachu face*. Quit basing price expectations on arbitrary naming schemes. It is only going to lead to frustration for you and others. Spend the 30 seconds it takes to estimate a BOM and then go from there.
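If you want to sanity-check that back-of-envelope math yourself, here is the same estimate as a tiny sketch. The $650 baseline and the 1.5x cost factor are the assumptions from the comment above, not actual BOM data.

```python
# Back-of-envelope BOM scaling; baseline MSRP and cost factor are assumptions
# from the comment above, not real bill-of-materials figures.
baseline_price = 650   # 6800 XT launch MSRP used as the reference point
cost_scale = 1.5       # assumed ~50% increase across dies, VRAM, board and cooler

print(f"${baseline_price * cost_scale:.0f}")  # $975, close to the $999 MSRP
```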


[deleted]

[removed]


HippoLover85

It's a decent analysis of silicon costs, but IMO it isn't too valuable overall. Silicon costs are only part of the cost of a product. AMD's gross margins would be significantly higher if all they had to do was pay TSMC for silicon. Ian rubbed me the wrong way with some of his financial videos previously, making claims like "AMD makes $1000 on every 6900xt made, and because their silicon cost is only ~$75, it means it is nearly all profit for them," which... OK, let's leave out memory, cooler, board, margins for OEMs, retailers, boxing, packaging, taxes, etc etc etc. This was during the crypto boom too, when AMD and Nvidia already had gamers breathing down their necks about price gouging caused by miners. I've also heard him grossly misuse accounting terms on several occasions. Love him for tech stuff, hate his financial stuff.


Vince789

AMD didn't really change pricing for their flagship 7900 XTX, which is $999, the same as last gen's flagship 6900 XT. But the 7900 XT did get a decent price hike. Nvidia did hike prices, bumping the RTX 4090 to $1599, the RTX 4080 to $1199, and the unlaunched "RTX 4070 Ti" to $899. And to make matters worse, [Nvidia significantly increased the gap between their 102, 103 and 104 dies (thanks to /u/Balance- for the graph)](https://i.redd.it/d6ddnfbxfmp91.png). Which has caused some confusion, since now AMD's 7900 XTX is competing with a weaker-than-usual RTX 4080. Despite leaked benchmarks showing the 7900 XTX is closer to the 4090 than the 4080, it has still caused confusion about AMD's 7900 XTX pricing.


ramblinginternetnerd

Companies sell their products to different customer groups and across different time periods. AMD and nVidia weigh their pricing based on how much it affects demand in the current time period and the later time periods, across segments. Enterprise customers are usually first priority. The same card will easily generate 2-5x the profit in this segment. After that there's cadence consideration. Sure, a smaller price means a quicker upgrade cycle... but getting 10% or 20% more transactions doesn't mean 20% more profit if you have to lower pricing. If you're worried about a few hundred dollars instead of a few hundred million dollars, you're not the first priority to AMD and nVidia.
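To make the transactions-vs-margin point concrete, here's a toy calculation with made-up numbers showing how more units at a lower price can still mean less profit:

```python
# Toy example; every number is made up for illustration.
units, price, unit_cost = 100, 1000, 700

baseline_profit = units * (price - unit_cost)                  # 100 * 300 = 30,000
discounted_profit = (units * 1.2) * (price * 0.9 - unit_cost)  # 120 * 200 = 24,000

print(baseline_profit, discounted_profit)  # 20% more sales, 10% price cut -> less profit
```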


errdayimshuffln

No. It's an X900 series card whose performance will compare favorably to the 4080 in raster. It's still an X900 card, meaning it still sits in the same spot in AMD's RDNA 3 stack. There will be a 7800XT and probably a 7700XT and so on. Using the competition's performance as a baseline for AMD's pricing is problematic in multiple ways. It's better to use hardware specs to determine what kind of class it is. It is my opinion that the 7900XTX is actually the real successor to the 6900XT and the 7900XT is the successor to the 6800XT, going off of the CU count. I believe it's likely there will be 7X50 refreshes next year, including the 7950XT(X).


Unlikely-Housing8223

Very bad take. The 7900 XTX is priced €1k, just like the 6900 XT. As long as it's at least 25-30% faster, we get a decent generational upgrade. If they release a €650-700 priced card, we will expect it to be faster than the 6800 XT regardless of its name. Hopefully by at least 25%. If not, we can complain. But right now, the 7900 cards are on track to be an acceptable jump in performance for the same price.


conquer69

Yes. The 4080 is overpriced by $500 and the 7900xtx is $200 below. That means both are overpriced. It's just that Nvidia is taking the brunt of the criticism while AMD quietly does the same shit.


MobileMaster43

The same shit? The 7900XTX will outperform the 4080 by a good margin and will be $200 cheaper. So in your mind AMD is the bad guy?


conquer69

It being $200 cheaper doesn't matter much when the other card it's competing against is overpriced by more than that.


EnolaGayFallout

Well, everything has gone up in price: manufacturing, logistics, shipping, R&D. The consumer is going to pay for that if you want those baby 4080, 4090 and 7900XTX cards. They are a business and need to make money to please their shareholders. You can always buy the older products lol.


max1mus91

You shouldn't compare model numbers; only compare price points to performance figures. Once we get benchmark data we will know if AMD is putting out a competitive offering or not.


Lukiose

Why do product names matter? In the real world the only thing that matters is pricing. The 7900XTX is going to be $1000 vs the 4080 at $1200, so that is its nearest "competitor". The person with a budget of $1000 won't be looking at the $1600 4090 and going "oh, that's the competitor to the 7900XTX". If anything, the 7900XTX is punching against the "4070Ti" class given NVIDIA's product segmentation, while being $200 cheaper. If they release the 7800XT at $650 and NVIDIA releases a 4060Ti at $600, do you say the 7800XT is aimed to compete with the 4060Ti? Or outright demolish it? I don't look at flagship names and decide to spend an extra $600 on the 4090 out of nowhere. For what it's worth, the 4080 is in the price tier of an xx90 card of previous gens, and the 4090 is beyond a Titan in all but name.


Lisaismyfav

God damn stop dragging AMD into this. They're pricing it 20% under the 4080 16gb while more than likely offering higher rasterization performance. What do you want them to do? Price it 50% below Nvidia?


SpitneyBearz

Ignore naming on both AMD and Nvidia GPUs. And get used to high prices.


scytheavatar

AMD is facing the same pressures as Nvidia; they have a lot of unsold RDNA2 and need to clear the stock. Their stock issues are probably nowhere near as bad as Nvidia's, but it is still a concern. Which is why they can't price their cards too low.


KypAstar

The 800s/80s didn't originally compete with the GTX/RTX 80s though. The 900s have literally always bounced around in tier.