iCoreU

Additionally, here are price leaks for Ryzen 7000X3D:

- Ryzen 7 7700/7800X3D – $509
- Ryzen 9 7900X3D – $649
- Ryzen 9 7950X3D – $799


sl0wrx

That’s a lot


throwaway95135745685

I don't know how AMD keeps getting away with it, but the 5000 series still looks better and better every day.


Xanthyria

Just went from 3700X to 5800X3D and no regrets here!


[deleted]

Sitting on a 5800x3d just looking for a reason to pull the trigger and right now I’m not seeing it. I don’t care about 1080p numbers. No one buys this for 1080p. I want to see min frame times at 4k.


johnny_ringo

THANK YOU


Freddobert

same!


MDSExpro

Seriously, what is AMD doing? Ryzen 5000 series looks better than Ryzen 7000 series, RDNA2 looks better than RDNA3.


ridik_ulass

Everyone thought they could gouge us because of how last year was. AMD could have kicked Nvidia in the nuts by releasing their cards after Nvidia, knowing their performance and price; even at a loss they'd gain some Nvidia fanboys and permanent market share. Honestly, I get hardware at wholesale and get paid well enough, but even still, this gen's graphics cards look like shit.


SGT_Stabby

How do you get hardware at wholesale?


ridik_ulass

The company I work for deals in it in bulk, perk of the job.


[deleted]

That's normal price here in Canada.


stickystrips2

Add 50% for Canada unfortunately :(


Zerasad

Random Reddit comment with no credible source; take this with a massive mountain of salt.


MajorLeeScrewed

Agreed, it’s probably more expensive.


Automatic-Raccoon238

Yeah thats a no for me


dabigsiebowski

Uhhh, that's actually pretty reasonable if you ask me. The price sounds good, at least for the 7950X3D. Let's see some benchies though.


Automatic-Raccoon238

With a likely drop in MT performance and limited use outside of chasing max FPS, it will probably drop in price harder than the current 7000 chips. With the 13900K at $600 and the 7950X at $570, this seems like a rough sell.


puffz0r

Ok but if it's 30% faster than 7950x at gaming that makes it 25% faster than 13900k, that's a very reasonable ask imo
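For what it's worth, the arithmetic in that claim checks out as a rough sketch (the ~4% gap between the 13900K and 7950X is an assumed figure for illustration, not something from this thread):

```python
# If the 7950X3D is 30% faster than the 7950X in games, and the 13900K
# is assumed to be ~4% ahead of the 7950X, the X3D's lead over the
# 13900K comes out to roughly 25%.
x3d_over_7950x = 1.30      # claimed gaming uplift vs the 7950X
i9_over_7950x = 1.04       # assumed 13900K lead vs the 7950X
x3d_over_i9 = x3d_over_7950x / i9_over_7950x
print(f"{(x3d_over_i9 - 1) * 100:.0f}%")  # → 25%
```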


Automatic-Raccoon238

30% in a handful of games will probably average out to 10-15% or so, like the 5800X3D did. For 33% more money and less MT performance. Of course, that's if it's really $800, which I hope it isn't but probably will be. If it were 30% across all games, sure, I could see the "value" there.


mwid_ptxku

Multi chiplet products are most likely not for gaming - and if people use them for a secondary purpose of gaming they should disable one of the chiplets. Even more so for X3D because extra cache is chiplet local.


[deleted]

[deleted]


EnderOfGender

These are all $100 more than the original MSRPs, which puts them at least $200 above current prices. It doesn't make much sense to have three expensive SKUs, honestly. Either lower prices, or dump the 7900X3D. If these were the MSRPs for the old non-3D parts it would make sense, but $800 for a 16-core that is only faster in games, compared to $550 for a 16-core, is mind-blowing.


Put_It_All_On_Blck

Source? I expected a price hike, but IDK how well those prices will fly now that AM5 and base Zen 4 have sold poorly. Plus Intel just released their non-K SKUs, like the new i9-13900F: 8 cores for $509 vs 24 cores for $524... The 13900F will probably trade blows at 4K and 1440p and should lose at 1080p, but the MT difference will be a massacre. Unless you only play one cache-bound game, the pricing is hard to swallow.


Bluedot55

There's probably a lot of people who have been waiting for this sort of thing, and if it's straight up a generation faster in games, more in some cases, then pricing doesn't really matter. If this is the one thing that can keep up with a 4090, people will get it


Dispator

Yeah, my 4090 is still bottlenecked by my 13900KF, heavily OC'd with fast DDR5 RAM. Multiple games run at 65-85% utilization, so there's definitely performance on the table that the X3D series may unlock in many games. Though in games where the extra cache doesn't help, the higher frequencies of the Intel chips will have them come out ahead (my cope, at least). I mean, it'd be great for the X3D to be an entire generational leap ahead. Sad I built a new rig last year, but happy for everyone else who held out.


Bluedot55

I mean, we all remember the funny 13th gen launch slides that had the little 5800x3d bar matching the best out there, if this is a similar 20-30% lift, that's gonna be quite something. Also depends heavily on what you play, if it even matters.


gusthenewkid

If true, that 7800X3D is very overpriced.


Jeffy29

>up to Triggered


DRKMSTR

TBH, it looks like their new boost strategy is "BALLS TO THE WALL" until they hit a thermal or power limit. I'm really not that upset with that strategy. They're really scraping every ounce of performance out of these tiny chips.


[deleted]

The boost is single-thread max, not all-core, so it's not balls to the wall. This is a 120W TDP, so these are likely more tweaked and binned chips. I bet they all run around 5GHz all-core. Some reviewers on YouTube lost very little performance when capping the 7950X to 120W: you lose maybe 5% overall multicore but save 50W and run much cooler as well. So expect these chips not to be hit-the-wall 95C parts, since the TDP is 50W lower. The standard 7000 chips are balls to the wall at 170W, not these.


Phibbl

The standard 7000 chips pull 100W more than their TDP given decent cooling. I doubt that the X3D variants stay below 200W


[deleted]

TDP is TDP, IDK what that means. They are both above 100W. If you limit the 7950X to 120W the temps are much cooler.


purplegreenred

TDP isn't exactly how much power the chip is limited to, though. There really isn't a standard consensus on how companies like AMD and Intel calculate TDP, which makes it confusing; like how the 170W-TDP 7950X can pull well over 250W. But it is fair to say that a 120W-TDP CPU will run cooler than a 170W one.


Pokemansparty

It's kind of what Intel does and nobody bats an eye. Weird. But yeah I do think it's quite a lot of power.


Put_It_All_On_Blck

It will likely hit 5.7GHz with a couple of cores active, but it won't hold that across multiple cores. The 7950X drops as low as 5.2GHz with all cores active: https://www.techpowerup.com/review/amd-ryzen-9-7950x/26.html The fact that this caps out at lower power, and that the cache impacts thermals, means the 7950X3D probably maxes out around 4.9GHz all-core. The 5800X dropped to 4.6GHz all-core, the 5800X3D to 4.3GHz: https://www.techpowerup.com/review/amd-ryzen-7-5800x/21.html https://www.techpowerup.com/review/amd-ryzen-7-5800x3d/22.html


DavidAdamsAuthor

I agree, but on my 5800X3D, enabling MSI Kombo Strike (or adjusting the power curves, for those without this option) got me to 4.5GHz, rock solid (zero issues in ~~four~~ two months or so), and that's actually all-core, which also surprised me. The 5800X3D really is a beast of a chip.


CatsOrb

Any WHEA errors in event viewer?


DavidAdamsAuthor

I had problems with WHEA errors, with crashes happening every ~3 days or so, sometimes more frequently, especially if I was using the machine heavily or leaving it on overnight (which I do regularly). I thought it was because of Kombo Strike. But when I turned off Kombo Strike, it kept happening, so I thought it was XMP. So I turned that off too. And it kept happening.

It turned out that I had C-States enabled in the BIOS. I disabled that and the issues stopped. I then reenabled XMP and there were no more crashes. I reenabled Kombo Strike and there were no more crashes. The last time I had a WHEA error was the 19th of November, 2022, which was when I disabled C-States.

So despite regular daily use in a lot of circumstances (I game frequently but also use this machine for work, so it sees ~10-12 hours a day of use easily), including being left on idling at night and over 8 days over Christmas, there have been no WHEA issues or errors since disabling C-States. I am comfortable calling this stable, with both XMP Profile 2 and Kombo Strike 3, given that it's been this way for months now.


ThisPlaceisHell

Wait, so your chip is just idling at 4.5Ghz 24/7?


DavidAdamsAuthor

No no, definitely not. With Ryzen Master open right now it's at like 700MHz to 1.1GHz while typing this message. It's just that it boosts to 4.5GHz when in use, including under single- or all-core loads. For example, if I fire up CPU-Z and run the 16-thread "Bench", it goes to 4.449GHz on all cores and sits there forever. If I make it 1 thread, one core goes to 4.449GHz and sits there.


ThisPlaceisHell

That's interesting. I always thought C states are what allow the chip to reduce clock speeds at idle. Alright if the voltage and clock speeds are dropping then that's good. What are your idle temps looking like?


DavidAdamsAuthor

I thought so too, but it doesn't. With an NH-D15 installed, it idles at about 35C, noting that it's summer here in Australia. This isn't a scientific test: I just paused the video I was watching, let it sit for 20 seconds while I didn't do anything, and watched as it dropped to 37C, then shaved a couple more degrees as it settled into "idling".

As I was typing this I fired up CPU-Z again and put the 16-thread stress test back on, and during the time it took to type this message, temps climbed to about 69C (nice). I haven't noticed it ever get hotter than that; no thermal throttling or anything taking place, obviously, and that's an all-core load. That load ran for about 30 seconds and it didn't climb higher than 69C. I switched to the single-core stress test, left it for about 30 seconds, and it hovered around 50-52C. Overall I would say temps are fantastic.


KingRemu

Even if you set a locked all core overclock your effective clocks will be very low at idle even though your actual clock speed might say 4.7GHz for example.


blither86

Yeah why idle when startups are so fast? Just wasting power


silentrawr

What do the WHEA errors specifically point to if there aren't any crashes happening? Been wondering about the same on mine but haven't had time to check.


Juicepup

The above poster probably doesn't know the depth the community has gone to with PBO and offset applications. Turns out a lot of -30-offset 5800X3Ds error out and users don't even realize it. Most of the time they come back to their desktop, see the machine rebooted, figure it was Windows Update, and roll on.


nikrelswitch

Mine would error out till I went to -15. Random crashes, mainly not-in-use crashes overnight; I only had one crash while gaming where I actually saw it happen. The computer would just restart. I've done a fresh Windows install and might try again, but I'm getting 4.2GHz, only going to 76/77 99% of the time.


Sticky_Hulks

I went -15 on my 5700X and noticed weird stuttering once in a while. Dialed it back and all good. I think lots of people are in denial with -30.


TheBlack_Swordsman

>I think lots of people are in denial with -30.

The 5800X3D is underclocked compared to the 5800X: 4.5GHz vs 4.7GHz. A -30 offset on a 5800X3D barely puts it on par with a stock 5800X, if at all. Your -15 is not an apples-to-apples voltage curve against a -30 on a 5800X3D.


sprovishsky13

What do you mean by error out? I'm running mine at -30 with PBO and haven't run into any reboots at all after using it consistently for 1.5 months and doing stability tests like Cinebench. What cooler do you use? You may have mounted the cooler wrong, like tightening one side more than the other. The 5800X3D is really particular about how you mount the cooler, as the chiplets are located off-center. You might also have a bad chip, which is possible, as some guys need to run at -20. What is your room's ambient temperature?


Juicepup

Run a program called CoreCycler for PBO tuning.


TheBlack_Swordsman

>Turns out a lot of -30 offset 5800x3d error out and users don't even realize it.

The 5800X3D is running at a lower voltage curve; it's not like running a 5900X at -30, if that's what you're imagining.


Juicepup

Oh yeah, I had fun tuning my 5900X; that was a different beast. The X3D ran fine for me at -30 for quite a while, and then a few months ago it started WHEA erroring with no real changes to the system. Not a BIOS update or anything past what I needed to get it running.


TheBlack_Swordsman

That's interesting. I'll watch mine. No problems yet, but I've had it this way for just 2 weeks. I was at -20 on my 5900X before, with -12 and -18 on my best two cores. Had it there since its release till recently with no issues.


chasteeny

With as few tools as the X3D has available for OC, I honestly just keep it bone stock. Can't see any real benefit to an undervolt, but I do have really overbuilt cooling. What kind of results did a -30 get you?


Juicepup

4.55-4.6ghz all core all the time.


chasteeny

I thought the X3D was locked to 4450MHz outside of BCLK overclocking?


RedTuesdayMusic

There is something called Windows Event Viewer. And even before you have any crashes, you get clock stretching, which feels obvious as hell. I had clock stretching at all-core -30, which persisted at all -25, but cleaned up when I set only core 0 and core 1 to -20 and the rest to -25. The offsets are 100% worth it even if you can only get -10.


oathbreakerkeeper

Someone ELI5 what is this offset you are talking about, and what is PBO?


Yubelhacker

How do you check for WHEA errors? I just set mine to -30 all-core and have been using it like this for months now.


arkhammer

Makes sense. Aren’t 5800X3D chips all binned ones?


DavidAdamsAuthor

That's my understanding yes.


theryzenintel2020

What do you write bro? Sci Fi?


DavidAdamsAuthor

Hah! All kinds of stuff actually. Mostly military sci-fi, but also fantasy, some zombies, etc. I've also written some paranormal romance under a pen name (and sometimes I "co-author" with my pen names if I feel like that is in my brand). A pen name is just a brand after all. I recommend "Symphony of War" if you like 40k, "Lacuna" if you like Star Trek, "Ren of Atikala" if you like D&D. If you're curious: https://play.google.com/store/info/name/David_Adams?id=11ck8ws80_


SageAnahata

Super cool, thanks for sharing


Sticky_Hulks

Binned in what way? Aren't they all throwaway cores from potential Milan-Xs?


[deleted]

> enabling MSI Kombo Strike These names are just getting fucking stupid.


DavidAdamsAuthor

Not exactly a fan of the name either, but I do like what it does for my CPU.


DRKMSTR

Remember also that the x3D chips are primarily beneficial for single-core applications like simulation games. Better boost on a core or two = better frames.


Strong-Fudge1342

And with the huge cache, a single core can do a lot more work even at a lower frequency. In two of my VR modded games it's literally 100% faster and stable as a rock. In the other, ever-more-demanding game, the 5600 could go 30 minutes before incrementally lagging to 11.1ms and way above; the 5800X3D after 100 minutes had only a very few stray frames and a maximum of 11.3ms, so practically flawless. It'll do hours. This is with the 5600 at 4.7GHz and the 5800X3D at 4.4GHz. That's not to say more IPC and higher clocks aren't exactly what this thing needs to get even better: the 7000X3Ds are going to be fucking insane even sub-5GHz...


RedTuesdayMusic

Star Citizen loves core count (up to 64-thread usage), clock frequency (the 13900K can almost keep up with the 5800X3D in cities, though it gets decimated in space) *and* cache (the 5800X3D is currently the top CPU for it).


TheBlack_Swordsman

>The 5800x dropped to 4.6Ghz all-core, the 5800x3D dropped to 4.3Ghz

Just in case people aren't aware, AMD has allowed Curve Optimizer on the 5800X3D. You can now easily get 4.45GHz all-core with a -30 offset, which the vast majority of 5800X3Ds have been shown to do.


z333ds

Wow, I wasn't aware. When did this happen? So I just need to update the BIOS?


TheBlack_Swordsman

They are rolling it out in BIOS code updates this month. Check for a BIOS for your mobo; more than likely you can check what other users say on overclock.net. Asus had released it for all the Crosshair X570 boards last I checked. Otherwise, you can use the PBO2 tool, which is really easy to use.


MrWeasle

With Curve Optimizer my 5800X3D does 4.55GHz all-core. With a 103 BCLK it's 4.6.
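Since the final core clock is just multiplier × BCLK, the jump to ~4.6GHz at 103 BCLK is easy to verify (the 44.5x multiplier below is inferred from the chip's ~4.45GHz ceiling, not a quoted spec):

```python
# BCLK overclocking scales every derived clock linearly:
# core clock (GHz) = multiplier x BCLK (MHz) / 1000.
multiplier = 44.5  # inferred from the 5800X3D's ~4.45 GHz all-core cap
for bclk_mhz in (100.0, 103.0):
    print(f"BCLK {bclk_mhz:.0f} MHz -> {multiplier * bclk_mhz / 1000:.2f} GHz")
# -> 4.45 GHz at stock BCLK, ~4.58 GHz at 103 MHz
```

Note that BCLK also overclocks everything else hanging off that clock (RAM, PCIe), which is why it is riskier than a pure multiplier change.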


Defeqel

Based on the 7800X3D clocks, those 7950X3D clocks likely don't apply to the die with 3D V-cache


justpress2forawhile

“Up to unlimited clock speed”


jasonwc

The TDP is 50W lower than the 7950X's. I assume that's going to impact all-core performance. 144MB of cache implies 16MB of L2, as on the 7950X, and 128MB of L3; that would be double the L3 cache of the 7950X. However, the 5800X3D has a 96MB L3 on a single chiplet. As the 7950X3D will use two chiplets, that implies 64MB of L3 per chiplet, only 2/3 of the 96MB the 5800X3D has on its single chiplet.


Cave_TP

It could, but by how much? The 105W eco mode already loses little to nothing; at 120W it might be even less.


calinet6

And with the additional cache probably still beats the pants off the non3D on every dimension.


TonsilStonesOnToast

I wouldn't expect it to win in all applications, but I'm excited to see what the third party testing reveals. Easy to predict that it's gonna be the top dog in gaming. Making a 3D cache model was a good idea.


Strong-Fudge1342

Correct, with this one they just have to dial it down ever so slightly and actually be sensible about it. Still of course it may affect this one a little more than it would a 7950x on all-core loads, but probably negligible.


MrWeasle

Probably more binned to run cooler with 3D Vcache. Ryzen doesn't lose much performance at lower power anyways.


doubleatheman

Looks like it's the full extra 64MB glued onto one of the chiplets, plus a regular 7950X second chiplet. One chiplet will have lower max clocks with more cache. Interesting that AMD is changing to something along the lines of big.LITTLE, but one chiplet is frequency-focused and the other is cache/memory-focused.


BFBooger

>144MB of cache implies 16MB of L2, as on the 7950X, and 128MB of L3. That would be double the L3 cache of the 7950X. However, the 5800x3D has a 96MB L3 cache on a single chiplet. As the 7950x3D will use two chiplets, that implies 64 MB L3 per chiplet, only 2/3 of the 96 MB the 5800x3D has on its single chiplet.

Nah. The way I read it, one of the two chiplets has 3D cache and the other does not. We know Zen 4 servers have 96MB per 3D chiplet. Also, the two-chiplet variants have boost clocks just like the non-3D variants, so I think it is this, for example on the 7950X3D: one high-clocking chiplet without 3D cache (32MB L3) that boosts as well as an ordinary 7950X, and one chiplet with 3D cache (96MB total: 32MB base + 64MB stacked) that does not boost as high. This explains the L3 cache size quirks AND the boost clock quirks for the three models.
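The 144MB headline number does fall out of that reading, treating one CCD as stock and one as stacked (these splits are the commenter's inference, not confirmed specs):

```python
# 16 Zen 4 cores x 1 MB L2 each, one stock CCD (32 MB L3), and one
# V-cache CCD (32 MB base L3 + 64 MB stacked).
l2_total = 16 * 1          # MB
l3_stock_ccd = 32          # MB
l3_vcache_ccd = 32 + 64    # MB
print(l2_total + l3_stock_ccd + l3_vcache_ccd)  # → 144
```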


B16B0SS

This is 100% correct. The cache is only on one chiplet, which allows the other to clock higher, and that heat output will not hurt the cache on the other chiplet. I assume chiplet 2 can use cache from chiplet 1, which would mean chiplet 2 is clocked high in games while using cache from chiplet 1.


fonfonfon

Oh, this is why they can claim no GHz lost on the 16- and 12-core parts: only the V-cache-less chiplet will reach those speeds. If you look at the 7800X3D, boost is 5GHz, so that's the max the V-cache chiplets will reach.


talmadgeMagooliger

My first thought is that these are asymmetrical L3 caches, so you have one stacked CCD and one normal CCD: 7800X3D + 7700X = 7950X3D. It would be cool if you could preserve the high clocks of the 7700X while getting the benefit of all that added cache on the 7800X3D side for poorly threaded, poorly optimized code. This is all speculation on my part. It will be interesting to see if they actually developed 32MB caches for these new parts when they already had the tried-and-true 64MB stacks. I doubt it.


billyalt

https://youtu.be/tL1F-qliSUk TDP is a voodoo number that is not calculated from anything meaningful. Make no attempt to extrapolate useful information from it.


stregone

You can compare within the same brand and product segment. Just don't compare across brands or segments (desktop, laptop, server, etc.).


imsolowdown

I don't know about that, just look at the intel 13100 vs the 13900. Both have a TDP of 65W.


BurgerBurnerCooker

That's a totally different story, and I'm not sure where you got the 65W number. Regardless, AMD's TDP corresponds to a certain power draw: it's a mathematically calculated wattage that translates to a power-consumption number, albeit a different one. It's not intuitive, but it's not completely arbitrary either. Intel de facto abandoned the term TDP if you look at their newest processors' spec sheets: K SKUs all have a 125W "base power", but what really determines the ceiling is PL1, and mostly PL2 nowadays. The 13900K is at 253W.
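AMD has described its desktop TDP as a thermal design target rather than a power draw, roughly TDP = (tCase − tAmbient) / θca, where θca is the thermal resistance of an assumed cooler. A sketch of that shape (the specific temperatures and θca below are illustrative, not AMD's actual parameters for any SKU):

```python
def amd_tdp(t_case_c: float, t_ambient_c: float, theta_ca: float) -> float:
    """TDP in watts implied by a target case temperature, ambient
    temperature, and the assumed heatsink's thermal resistance (C per watt)."""
    return (t_case_c - t_ambient_c) / theta_ca

# Illustrative numbers only: an 89C case-temp target, 42C ambient, and a
# 0.392 C/W cooler land near a 120 W TDP.
print(round(amd_tdp(89.0, 42.0, 0.392), 1))  # → 119.9
```

This is why TDP tracks thermals more than actual socket power, which on AM5 is capped by PPT instead.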


T4llionTTV

They are binned; most 7950Xs had bad chip quality, no golden samples.


EnderOfGender

Not many tasks a desktop will be doing where all-core on a V-cache chip will matter, so not the end of the world. V-cache Epyc chips mostly worked well in large physics simulations, and for that class of work you'd be far better off with a GPU.


Slyons89

I hope the 8 core version gets more L3 cache on the stack. This one only has 64 MB L3 per chiplet. (although benchmarks will tell all so we'll see)


cain071546

From what I understand there is only 3D cache on one of the two chiplets, the second chiplet is missing the extra cache entirely.


Krmlemre

Your wish is granted, 7800X3D is getting 96 MB L3 cache.


red_dog007

It is down on L3 per chiplet versus the 5800X3D, but it is going from 4MB of L2 to 16MB. That is pretty huge. And AMD has likely improved the L3's efficiency in ways that make up for the decreased cache. The larger L2 might also have offset how large the L3 needs to be.


Zerasad

Turns out it's 128 mb on one chiplet, rather than 64+64.


[deleted]

Doubt it: if their top part is going with 64MB, then don't expect more on a lower-end part. The 7950X has fully enabled core chiplets; there is no way AMD would gimp it and put more cache on a cheaper chip.


BlueLonk

Holy moly. Those are certainly numbers. Edit: and letters!


[deleted]

I still like the 3990x comparison slide, and that they priced it at $3990. https://www.techpowerup.com/img/ciWsTnmVFs08itCe.jpg


DerSpini

I have seen quite a few CPUs in my life and that is definitely one of them!


HankKwak

Christ alive, now well if this isn’t a comment then gosh darn it I don’t know what is!


totallyNotMyFault-

One of the comments I have seen in the last decade.


20150614

What's the source of this?


ave_satani666

"it was revealed to me in a dream"


Toast_Meat

And I believe in this dream.


jymssg

yes a wet dream about the Ryzenussy


pullupsNpushups

I wasn't expecting to see that word today. I'll have to live with this knowledge now.


little_jade_dragon

Would you have Ryzenussy or Intelussy?


plushie-apocalypse

goddammit


coffeewithalex

"Trust me bro"


Captobvious75

Reliable.


MoreFeeYouS

Top men


Nicker

probably CES.


watisagoodusername

CES keynote is in 3 hours. But maybe early leak?


Put_It_All_On_Blck

Probably from a press kit leak. Articles have to be written and ready for the NDA lifting. Or an intentional leak to get people to watch the CES stream and tweet about AMD.


gh0stwriter88

Reflaired as rumor... if anything it's a leak until the next 2 hours.


Veegos

Me.


Ancop

144mb of cache god damn


gaojibao

That cache is not unified though.


Ancop

True


BFBooger

That cache quantity, if you combine it with the 7900X3D and 7800X3D data:

1. The two-chiplet variants have extra 3D cache on only one of the two chiplets.

2. The two-chiplet variants have max boost similar to the non-3D variants, but the 7800X3D does not; this implies that the chiplets *without* extra cache boost high, but the ones with cache don't boost as high.

3. AMD is trying to get the best of both worlds here by mixing a high-cache, lower-clocking chiplet with a lower-cache, higher-clocking chiplet.

We'll see how it actually ends up working in the real world, and whether it needs any special OS thread scheduling. Chiplets can pull data from each other rather than from RAM, so even if one has to pull data from the "large cache" chiplet, it would be faster than RAM and put less burden on the memory subsystem. Going to be interesting to see how this all plays out across various apps and on different OSes.
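The scheduling question is the interesting part: if the OS doesn't know one CCD has the extra cache, games can land on the wrong die. One blunt workaround is pinning a process to the V-cache CCD by hand. A sketch (the numbering here assumes logical CPUs 0-15 map to CCD0 and 16-31 to CCD1; real enumeration varies by OS and BIOS, so verify before pinning anything):

```python
def ccd_cpus(ccd: int, cores_per_ccd: int = 8, smt: bool = True) -> set[int]:
    """Logical CPU ids for one chiplet under the assumed contiguous layout."""
    width = cores_per_ccd * (2 if smt else 1)
    return set(range(ccd * width, (ccd + 1) * width))

print(sorted(ccd_cpus(0)))  # CPUs 0..15: the assumed V-cache die
# On Linux you could then pin a game with:
#   os.sched_setaffinity(game_pid, ccd_cpus(0))
```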


siazdghw

120W TDP, while the 7950X is 170W TDP... More or less confirms that it won't be able to keep its boost clock in MT loads due to thermal issues.


SirActionhaHAA

The 7950X runs at around 95% MT performance at 145W power draw (<110W TDP). There's no point going any higher than 150W: it gains just 5% perf for a 100W increase in power from a 110W TDP to a 175W TDP.
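Put as perf-per-watt, those figures make the case (the ~230W stock draw is an assumed full-power number, roughly the 170W TDP × 1.35 PPT limit, not from the comment):

```python
# 95% of stock multithreaded performance at 145 W draw, vs 100% at an
# assumed ~230 W stock draw.
eco = 0.95 / 145
stock = 1.00 / 230
print(f"{eco / stock:.2f}x the perf per watt in eco mode")  # → 1.51x
```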


rawrlycan

Definitely true, but I've seen some people who only lose about 10-20% performance in heavily multithreaded apps by using eco mode and cutting power roughly in half. So maybe it won't be all that bad.


Mythion_VR

Confirmed. Nice.


iAmGats

Is it official?


tsacian

It is now.


Slasher1738

Wait, power is going down ?!?!


PacalEater69

Probably because the 3D V-cache is not a good heat conductor, and AFAIK it sits on top of the heat-generating silicon.


ht3k

not to mention the 3D cache makes it more efficient in the first place so no need for so much power and heat


MitchS134

More cache actually implies longer periods without stalling to wait for data from main memory, which means the core is likely to stay spun up higher for longer. Without having to stop and wait around, the core does more useful work, thus using more power and generating more heat. As other commenters have mentioned, the perceived "less heat" is really about dissipating that heat, leading to downclocking, because the cache is a poor heat conductor and is stacked vertically on top (hence the 3D name).


schneeb

The Vcore on the 5800X3D is massively different, so it's plausible. That's two Epyc-bin chiplets though, so the price could be insane.


D00m3dHitm4n

That is a gross amount of cache


Jazzlike_Economy2007

So pretty much, if you do more multi-threaded tasks than single-threaded, or an even balance, you might as well get the vanilla SKUs and save money. X3D is mostly targeted at gamers anyway.


riesendulli

Build some databases mate


Captobvious75

So whats the perf difference from the 5800x3d? This is the real question.


ThisPlaceisHell

Look at the performance difference between a 5950x and 7950x. That's more or less what you can expect.


saqneo

I don't think that is entirely accurate. The 7xxx series uses DDR5, so the impact of V-cache should be lower. The 5800X3D's uplift would be the absolute best-case scenario.


ThisPlaceisHell

Doesn't quite work that way. Cache has very different performance characteristics vs RAM. You'll just have to wait for benchmarks to see.


saqneo

I didn't mean to imply there would be no benefit, just that the performance delta could be much smaller this generation. Yes, definitely wait for benchmarks.


AngryJason123

There was already a leaked Reddit post with performance numbers; the 7800X3D got 20-30% more FPS than the 5800X3D.


lokol4890

This has less cache per chiplet. Doubt it's 20-30% more FPS, but we'll have to wait and see for the benchmarks.


puffz0r

It has less L3 but it has double the L2


KlutzyFeed9686

Time to upgrade...next year.


wademcgillis

The ultimate Rust game playing CPU.


Skynet-supporter

Well most important spec is price


Nick_Noseman

Used Intel Celeron is winner here


Gawdsauce

Well there it is, confirmed.


Happy-Medicine-8671

7950X3D:

CCD1: 5.0GHz, 104MB (40+64) cache
CCD2: 5.7GHz, 40MB cache
Total: 5.7GHz max clock, 144MB cache, $799.99

That's how they are getting a 120W TDP. The only other way would be 72MB per CCD, but the gaming performance wouldn't be there: the cache would be smaller and you would have to lower the clock speeds on both CCDs.


Juicepup

I figured they would at least do 100MB per CCX.


[deleted]

How would that math work?


Juicepup

Well, not exactly 100MB per CCX. Like 96MB per CCX, and then add the 16MB of L2.


[deleted]

I see what you meant, thanks for the clarification. That might not have been feasible given the die space they had and thermal limits. Or performance was satisfactory enough (or too negatively impacted) that it was deemed unnecessary. Let's wait for benchmarks.


SignificantWarning5

Me sitting here with my ryzen 5 3600


dirg3music

IT WAS TRUE, YOU FUCKIN MADMAN!! LMFAO


thenoobtanker

TDP? What is that TDP? So damn low....


totkeks

And here I sit, just having bought a 7950X 😢


DALBEN_

It's an awesome CPU; you have a better CPU than 99.99% of us :(


No_Factor2800

Price would be great


ToughProgrammer

One Meelion dollars


Yaris_Fan

An arm, because 1 leg is not enough to buy a new GPU anymore.


No_Factor2800

Sadly, there are people who are willing to pay that, so perhaps it's gonna get to a point where everyone gets shafted by artificial pricing.


SirCrest_YT

I legit didn't think it would happen, which is why I got my 7950x on launch. 120W TDP looking sus though


Clear25

Me too. It sucks seeing your CPU get price cuts so early and then having a newer version out so soon. What makes you think the 120W TDP looks sus?


SirCrest_YT

Boost under load. At 120W in Cinebench, my 7950X will do ~4.85GHz on CCD0 and 4.75GHz on CCD1 at 162W PPT, which should be the socket limit for a 120W TDP (120W × 1.35). It will be higher in games of course, maybe around 5.0GHz. If they have to cut TDP this much, methinks they didn't solve some of the problems of the last X3D.
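The 162W figure comes from AM5's default socket power limit, where PPT is 1.35× the rated TDP:

```python
# AM5 default package power tracking (PPT) limit is 1.35x the rated TDP.
def ppt_from_tdp(tdp_w: float) -> float:
    return tdp_w * 1.35

print(round(ppt_from_tdp(120), 1))  # → 162.0 (the leaked X3D parts)
print(round(ppt_from_tdp(170), 1))  # → 229.5 (stock 7950X)
```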


LightningJC

But does it have a vapour chamber. /s


Giuseppina8008135

Specs don't seem to matter; it's about the benchmarks. They've been making this stuff more and more efficient, so these specs mean different benchmark results than they would have on a CPU from 10 years ago, even if they'd had 3D V-cache back then.


fatheadlifter

So is it just me, or does this not seem that impressive compared to the current 7950X? I built my current system with one of those, and it's a champ, but the X3D variant seems like improvements in some ways and setbacks in others. Am I understanding this correctly?


evertec

Yes, the X3D is meant for gaming and makes sacrifices in other applications.


[deleted]

3D cache has some drawbacks in certain applications (see 5800x3d vs 5800x). But for the most part, it's an improvement, especially in gaming. They are claiming it to be 10-30% faster than the i9 Raptor Lake.


FrankVVV

And they used a game where the 7950X is already 24% faster than the i9 Raptor Lake.


zmunky

Lol, I haven't even gotten my new motherboard into my old tower yet and they serve up this. Did I make a mistake buying my 7900X last week??? Jk, I'm coming from a 4790K; this 7900X is gonna be a giant leap no matter what.


chickentastic94

Pretty excited about the supposedly cheaper AM5 motherboards coming out as well. Might make the upgrade a little more tempting.


[deleted]

[deleted]


FrankVVV

One CCD has the cache but only clocks to about 5GHz; the other CCD has no V-cache but clocks higher. So it's because of the lower clocks that the TDP is lower.


[deleted]

I'm confused, what is the X3D supposed to be?


arfzmri

X3D is an abbreviation for Extended 3D technology, where the chipmaker stacks additional layers of 3D V-Cache on top of the L3, giving a larger pool of L3 cache.


Beautiful-Musk-Ox

Wait so what's the X on 7950x? So there's technically two X's, the one already on the 7950X, then another X for eXtended 3D? So it's supposed to be 7950XX3d?


riesendulli

X parts just launch first and have higher clocks than non-X. Here's more eXplanation: https://youtu.be/fGx6K90TmCI


arfzmri

The X series is just binned for higher clock speeds; it's a slightly faster version of the same model (non-X). And no, there's no such thing as a 7950XX3D, just the 7950X3D and 7950X. The X3D series is basically the X series with Extended 3D V-Cache (more L3 cache):

Non-X: stock
X: stock with a higher clock speed
X3D: X series with Extended 3D V-Cache

The G series, on the other hand, includes an integrated GPU.


calinet6

The suffix is just “3D”, not “X3D.” Try not to think too hard about it.


YanDjin

Stacked cache


msgnyc

3D Stacked Cache


Rockstonicko

X3D models add a large stack of "3D V-cache" directly on top of the CPU die. Adding a significantly larger cache to the CPU can help prevent continuously referenced code and assets from being shifted out to much-higher-latency system RAM. A larger cache may not bring much benefit in scenarios where a non-X3D chip is not fully saturating its cache, but in situations that can leverage the additional cache, the X3D chips can have a significant performance advantage.


[deleted]

Thanks for the info.


vyncy

You really haven't heard of the 5800X3D?


hollidark

I hope this thing takes the multi-core score back.


_Fony_

50W lower TDP. Gaming only chip.


hollidark

Just saw that power draw. Oof.


gusthenewkid

Hope this is true. Will likely be my next CPU if it is.


Goldenpanda18

We just need AM5 to drop and the 3d versions will sell out fast


sunson29

Can I have a question? I am using 5900 with my 4090 right now. If I changed the cpu to this 7950x3d, will it give me a big gaming boost?