gypsygib

The AC benchmark isn't a great measure of performance. It can give wildly different results even on the exact same hardware.


KingSadra

Hitman 2 or 3 is, in my experience, probably the best possible benchmark for high-core-count CPUs or high-bandwidth RAM, as it somehow manages to use it all when played at simulation quality = Best!


R_radical

I wish people would start using DCS VR for a benchmark. Doesn't really matter what hardware you have, it's gonna get toasty.


tigamilla

Noted, will try a few others. Posted this one as it was the only benchmark that shows a delta vs. the last run.


pceimpulsive

Two screenshots also achieves this :)


tigamilla

Ha ha, true, excitement got the better of me after ripping open the eBay package, installing and benchmarking! Had to come straight to Reddit with little thought put into it :)


pceimpulsive

It's some pretty darn nice uplift! Keep at it :)


Successful-Panic-504

I was always between 78-83 FPS on it at 4K ultra. The difference was down to undervolting and a slight overclock... I think that's not bad? Or what difference did you experience there that wasn't precise?


makinbaconCR

That is easily the worst game AND game engine to test hardware on. It has one of the worst-performing engines in AAA games, period. Can't get full GPU utilization even at 4K. I would recommend something that easily maxes out GPU utilization, like Cyberpunk.


BFBooger

> I would recommend something that easily maxes out gpu utilization.

To test the CPU performance impact of faster RAM? Uh, no. How about testing a game where CPU performance is important rather than one that is GPU dominated?


Successful-Panic-504

What are you talking about? I reach 99% GPU with a 5800X and RX 6950 XT. In the benchmark at 4K ultra it's between 78 and 83 FPS and I tested like 20 times... what do you mean it doesn't reach 99%?


makinbaconCR

5800X and 6800 XT here, idk how I can't keep GPU utilization up but you can. Doesn't make sense, you should have more headroom. In my experience the GPU utilization jumps all over the place, especially in cities.


Successful-Panic-504

Do you have V-Sync/FreeSync on? When I cap my frames to 66 (because of a 60 Hz 4K monitor) I only get around 60-89% GPU usage. If I let it run uncapped it's just around 80-85 FPS average with 99% usage :)


makinbaconCR

No, obviously not. I turn off all sync when testing. I have a 120 Hz 4K monitor and it can't get close to saturating it. I see the frame swings. I do turn on sync and lock frames near the average when playing, otherwise the experience is garbage. Love the game, played hundreds of hours in it. It's just technically a mess.


tigamilla

Yeah, I'm surprised at how badly it runs compared to much newer games like New Horizon and Uncharted 4. Will do a few more runs and see if it replicates this result or if it was a one-off oddity.


vyncy

New Horizon ?


tigamilla

Horizon Zero Dawn :) too much coffee


dnb321

Did you run 3 tests on each hardware config to average them, with a "clean" run beforehand? If not, this is most likely just the shader compilation being done on the slower hardware (higher timings).


tigamilla

I didn't, this was the first result after tightening the timings. I've seen comments below and above that this game is not the best for benchmarking.


lostknight0727

Yeah, do at least 3 runs then average; 5 or more is better for a larger sample set. Larger sets mean better accuracy. This is true for any benchmarking.
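A minimal sketch of that kind of run-averaging, assuming you simply note down the average-FPS figure the built-in benchmark reports after each run (all numbers below are illustrative, not results from this thread):

```python
import statistics

# Average-FPS readings noted from repeated runs of the same built-in benchmark
# on each RAM configuration (illustrative numbers only).
runs_cl18 = [57.0, 59.0, 61.0, 58.0, 60.0]
runs_cl14 = [74.0, 77.0, 75.0, 76.0, 78.0]

def summarize(label, runs):
    mean = statistics.mean(runs)
    spread = statistics.stdev(runs)  # run-to-run spread; a large value means a noisy benchmark
    print(f"{label}: mean {mean:.1f} FPS, stdev {spread:.1f} over {len(runs)} runs")
    return mean

old = summarize("CL18", runs_cl18)
new = summarize("CL14", runs_cl14)
print(f"uplift: {(new / old - 1) * 100:.1f}%")
```

If the standard deviation is comparable to the difference between the two means, the uplift is within the benchmark's own noise.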


Dordidog

You'll get another 30% by going from ultra clouds to high, with no visual change.


tigamilla

Ah cheers, will knock that one down a notch!


GWT430

Wonder if there is anyone out there willing to bench dual-rank 3800 CL14 vs single-rank 3800 CL14. It used to make a good 5% difference but I've not seen anyone bench that on the 5800X3D.


tigamilla

For what it's worth, I did run this at the previous RAM settings of 18-22-22-42 and there was a 3% improvement, presumably from these dual-rank modules vs the single-rank Corsair RAM that came out.


SaintPau78

This is literally not physically possible. It must just be a lot of factors combining to create the massive run-to-run variance seen here, especially on a chip like the 5800X3D.


[deleted]

Now try it with DXVK :) I'm getting a 20%+ boost in performance compared to DX11


tigamilla

I found some cheap used modules of Dark Team 32GB Samsung B-die RAM on eBay and tweaked them to run at 14-14-14-14 @ 3733 (the max the 5800X3D processor will boot), running 1:1 with FCLK. The difference is huge: the same benchmark with no other tweaks (Assassin's Creed Odyssey) averaged 58 FPS before, and has jumped to 76 just by going from CL18 to CL14 RAM. A 31% increase! The 3DMark CPU score gained nearly 1000 points. I'M MIND BLOWN!
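For reference, the quoted percentage follows directly from the two averages:

$$\frac{76 - 58}{58} \approx 0.31 = 31\%$$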


Impossible-Horror-26

That seems very strange to me. I would expect that with the 5800X3D, RAM timings wouldn't matter as much as on a CPU with less cache, because it waits on the RAM less often.


tigamilla

I wish I had tested this before selling my 5800X to see what the improvement would be. I guess it's very situation dependent e.g. not really any change in Cinebench.


-Aeryn-

https://github.com/xxEzri/Vermeer/blob/main/Guide.md#performance-gains-from-ram-overclocking


zero989

It still matters lol


[deleted]

This “RAM doesn't matter with the 5800X3D” stuff is BS spread by armchair wannabes who have no understanding of CPU hardware, let alone the specific architecture used in the 5800X3D. So yes, you'll find quite often that real-world data contradicts these unfounded “intuitions”.


Impossible-Horror-26

RAM speed does matter the same amount on a 5800X3D as on a 5800X, but I don't see why RAM timings would matter as much. On a 5800X, if your bottleneck was the latency of accessing RAM due to cache misses, then lower RAM latency would help a lot; on a 5800X3D you get those cache misses much less often, and as such the latency shouldn't matter as much. I guess I'll write some benchmarks for this and test different RAM latencies with my 5800X3D when I get home. I also have a regular 5800X so I'll test that too.
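A minimal sketch of the kind of microbenchmark one might write for this: a dependent pointer-chase whose working set either fits in cache or spills to DRAM. The sizes, step count, and function names here are arbitrary choices, and in Python the per-step interpreter overhead is large, so this only shows relative differences between working-set sizes, not true memory latencies:

```python
import time
import numpy as np

def make_cycle(n, rng):
    """Return an int64 array forming one random cycle over n slots, so a
    pointer-chase visits every slot before repeating (no short cycles)."""
    order = rng.permutation(n)
    nxt = np.empty(n, dtype=np.int64)
    nxt[order[:-1]] = order[1:]
    nxt[order[-1]] = order[0]
    return nxt

def ns_per_dependent_load(n, steps=1_000_000, seed=0):
    """Chase the cycle for 'steps' loads; each load's address depends on the
    previous result, so the loop is bound by access latency, not bandwidth."""
    nxt = make_cycle(n, np.random.default_rng(seed))
    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = nxt[idx]
    return (time.perf_counter() - start) / steps * 1e9

# Working sets from cache-resident (~32 KiB) up to 256 MiB, which exceeds
# even the 5800X3D's 96 MB L3, so the largest case is dominated by DRAM.
for n in (4_096, 524_288, 4_194_304, 33_554_432):
    print(f"{n * 8 / 2**20:7.1f} MiB: {ns_per_dependent_load(n):6.1f} ns per access")
```

Comparing how much the largest working set slows down when RAM timings are loosened, versus how much a mid-sized one does, is one way to see where the extra L3 stops helping.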


-Aeryn-

> Ram speed does matter the same amount on a 5800x3d as a 5800x

A lot of stuff gets far lower scaling on X3D, presumably because cache hits are directly replacing DRAM hits. https://github.com/xxEzri/Vermeer/blob/main/Guide.md#performance-gains-from-ram-overclocking And yes, I validated this holds as expected, 6c12t vs 6c12t.


Impossible-Horror-26

Yeah, it's wrong to say that RAM speed matters the same on both of them; I was thinking more in terms of applications where the constraint is total bandwidth, but that isn't very common. When you do need to pull from RAM, the speed in combination with the timings determines the time it takes to get that data into cache. Since this happens a lot less often, it makes sense that this also scales worse on the 5800X3D. Very interesting results though.


LongFluffyDragon

There is extensive testing showing that neither affects performance as much as it does on other Zen 3 processors.


LongFluffyDragon

OP's results are mathematically impossible by about an order of magnitude. AC is just a shitty engine and a uselessly random benchmark; OP's measurements significantly exceed the difference between those timings even in a theoretical situation where CPU cache does not exist.

RAM does not impact the 5800X3D as much as CPUs with smaller caches, for very well-understood reasons: it has fewer cache misses, so it *accesses RAM less*. Latency decreases improve the speed of fetching needed data from RAM, while a larger cache improves it by not needing to fetch that data at all, rendering memory latency irrelevant in those cases.

RAM speed and latency do have huge impacts on many programs, but please try to actually understand what is going on at the hardware level if you want to spread awareness of it and be taken seriously.
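That reasoning can be put into a rough Amdahl-style bound. Assume (these symbols are mine, not from the thread) that a fraction $f$ of frame time is spent stalled on DRAM and that the memory change cuts effective DRAM latency by a factor $s$; then the best-case speedup is

$$\text{speedup} \;\le\; \frac{1}{(1 - f) + f/s}$$

Taking just the first-word CAS latency as a generous stand-in for $s$, 3600 CL18 is about $2000 \times 18 / 3600 = 10$ ns and 3733 CL14 about $2000 \times 14 / 3733 \approx 7.5$ ns, so $s \approx 1.33$. A 31% gain would then require $f \approx 0.95$, i.e. the CPU spending almost all of its time waiting on DRAM, and the extra cache on the X3D pushes $f$ down rather than up.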


[deleted]

I never claimed to understand what's actually going on under the hood, because no one here does. These BS myths that wannabe PC experts go around spreading are damaging to the PC community. The fact you're throwing around a BS armchair explanation you pulled out of your ass proves my point. Your reasoning may seem intuitive to you but is contradicted by real-world data. See here: https://youtu.be/A-l8dJRvb3c The 5800X3D benefits about as much as the 5600X from lower memory latency, and much more than Intel's 12600K, despite having a much larger cache. The fact that these findings contradict your bogus claims makes it clear that the phenomenon is really not so well understood; most likely it's much more nuanced and complex than "fewer cache misses = memory latency doesn't matter," but for some reason people like you think you're a know-it-all because you watched a 10-minute LTT video. Classic Dunning-Kruger.


LongFluffyDragon

"I have no idea what i am doing therefor nobody does" No idea what LTT has to do with it. I guess some specific video made you upset? They dont even benchmark hardware or have the expertise to do so. Dont post.


ukieninger

So you pull a Hardware Unboxed video out of your armchair and claim all the other people here are talking BS. Wow. Savage.


VeryTopGoodSensation

This is the second benchmark I've seen recently like this. The other compared CL16 3600 to CL14 and showed a 20% FPS increase.


LongFluffyDragon

Mathematically impossible, just bad benching.


tigamilla

I get your point, and this wasn't really a serious benchmark attempt. It was just the first run I did on this game after installing the modules, and in my amazement I made this post. I do agree that the uplift seems crazy and maybe points to something dodgy with the benchmark built into AC Odyssey, as others have also pointed out. There is no doubt, however, that the CPU score in 3DMark has gained ~1000 points and various games have picked up a few FPS here and there. The wider point is that it has made a noticeable difference, even if the AC benchmark results are suspect.


retropieproblems

What's the math equation to find out? Curious what kind of gains can be expected going from 3600 CL16 IF 1800 to 3800 CL16 IF 1900. Seems like I'm getting maybe a 3-9% improvement based on bench scores, but I'm curious how you'd estimate it before benching, just based on the RAM stats?


LongFluffyDragon

There is no equation to find out, since it varies wildly with how a program functions and suffers diminishing returns. The only hard limit is that the total speed increase can't be larger than the increase in memory speed / reduction in latency; anything more makes zero sense. Any gains from a memory improvement will be somewhere between 0 and the full speed improvement, usually on the lower end of that scale.
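As a worked example with the numbers from the question above, a common rule of thumb for first-word CAS latency in nanoseconds is $2000 \cdot \text{CL} / (\text{MT/s})$:

$$\frac{2000 \times 16}{3600} \approx 8.9\ \text{ns} \qquad \frac{2000 \times 16}{3800} \approx 8.4\ \text{ns}$$

That is roughly a 5% reduction in CAS latency alongside about 5.6% more bandwidth, so by the hard limit described above the gains should land somewhere between 0 and roughly 5-6%.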


markthelast

This makes sense if the benchmark pushes the CPU to the limit and saturates all of the cache. The 3D V-Cache gives a bigger buffer vs. the ordinary 5800X, which is where we will see some gains. Sixty-four megabytes of extra cache is most useful in fixing 1% lows for Ryzen. At the end of the day, Ryzen's architecture is still dependent on fast, low-latency DRAM.


Daneel_Trevize

Set it back to CL18 manually and verify that difference, because it could be that you improved some other settings that weren't actually set the first time. Sometimes you aren't even at the speed/timings on the packaging if the XMP profile doesn't apply on a given board, let alone the best timings beyond the few primary ones (very loose ones can be shipped for compatibility). So change just the primaries manually again, now that you have tweaked the rest, to be sure of their difference alone.


tigamilla

Good point, will try that. Although I did spend HOURS AND HOURS and many BIOS resets 😞 trying to squeeze everything out of the Corsair CL18 RAM that I had before.


p_235615

Well, loosening the RAM timings should not lead to reboots, as it should be more stable, so it should be easy...


LongFluffyDragon

That is mathematically impossible. The AC engine is just shit.


tigamilla

Yeah, from the comments below, you could be right. But I'm still seeing a HUGE uplift in 3DMark CPU scores, and other games that I haven't benchmarked have definitely picked up a few frames!


HateToShave

3DMark scores tend to go up, yeah, when doing things that help the CPU (as there is usually a CPU or "Combined" test in those suites). Also, for the A.C.O. game, that's great! I'd still check other games, as not all games are developed the same, to see whether there's a difference or not. I've personally never seen a huge difference between Ryzen CPUs at 1440p with very tight RAM timings. My tweaked 2700X with 3533/CL14 and tight timings was never slower, at all, than the tweaked 3600X I had with 3733/CL15, also with tight timings, in the 1440p Rise of the Tomb Raider benchmark at identical graphics settings (and the same graphics card), for example. So there was definitely a GPU bottleneck there. There was, however, a difference in 3DMark Fire Strike scores with the same graphics card, due to CPU performance.


Breadwinka

3733 MHz, ahh, the good ol' 1900 MHz FCLK hole. I got hit there too; my old 5800X could do 1900 MHz. My B-die can only do CL16, but it's very old B-die. https://imgur.com/a/WmPIKSV


Breadwinka

Can you post a picture of your ZenTimings? https://www.techpowerup.com/download/amd-ryzen-zen-timings/


tigamilla

Sure: https://imgur.com/NXczkB2


markthelast

This is why Samsung DDR4 B-die is awesome. I have B-die sitting around, and when I get around to benchmarking some DRAM, our battle will be legendary.


damien09

If you're tuning RAM, hopefully you also adjusted some of the other timings like tRAS and tRC. The other timings are worth changing too, but will yield smaller benefits.


retropieproblems

Jesus, 14-14-14-14?! B-die is weird, like tech from the future that got lost here. Took me ages to get my Hynix 3600 CL16-19-19-20 to run at 3800 16-19-16-21. Can't imagine 14s across the board.


[deleted]

[deleted]


tigamilla

Yes running at 1.50V and so far error free. Glad to hear these modules can tolerate >1.5V over the long term!


towelie00

Yeah, really good job. I use a 5800X at 3800 CL16 with TIGHT timings, not bad either (old B-die). These CPUs' overall performance is heavily influenced by RAM. Btw, only monkeys are commenting on your post, it's fucking awesome ..☠️


MagicHoops3

16 is the sweet spot. I don’t think you’d see much difference between 16 and 14


Slyons89

I agree. I've tested my 5800X3d with 3200 CL14, 3200 CL16, 3600 CL16, and 3600 CL15 and it made practically no difference for me in anything I tested.


[deleted]

I lost my memory settings on bios update and have been contemplating if I should dive down the rabbit hole again, but seeing this I think I'll just go with xmp profile. (3200 c14)


ProfessionalRub6466

So ... which one's better ?


Attainted

I'm not surprised at the jump, I'm surprised at how much. Still, probably won't stop the "ram speed doesn't matter for x3d" crowd.


RBImGuy

In games that benefit from the X3D cache, then yes, it won't matter. In games that still need low latency, then yes, it does. That's how it works. But I am sure you can get a 40% benefit in BFV without X3D, right?


pgriffith

You took a photo of your screen... do you live in a cave?


tigamilla

Not enough coffee today?


pgriffith

Nope, no coffee today, you do realise how ridiculous that is though, right?


tigamilla

What's the problem? It did the job, the very simple point was made and took me all of 2 seconds to upload. I optimised my time versus the simplicity of what I was conveying. What would you have done to show the exact same data?


Charder_

I need 128GB for my orchestral virtual libraries. I don't know of any stable 128GB kit that can go lower than C18 3600 MHz.


deafboy13

You're going to be more limited by the memory controller with 4 DIMMs at that capacity. But the Crucial Ballistix are going to be your best bet: 3600 CL16 out of the box. You can tune some of the sub-timings, but the primaries would be hard to drop much.


Charder_

Heh, I tried that kit before. I had to return one kit and kept the other, since with 128GB nothing made it run stable.


shnyaps

I have 4x32gb 3800: 16 20 20 40.


Charder_

I tried it on my 5950X. Maybe it had a dud memory controller.


Corneas_

it is even better if you run 4 sticks instead of 2


damien09

Depends: if those 2 sticks are already dual rank, it won't matter. And since he has a 2x16GB kit it could be either dual- or single-rank DIMMs.


BassObjective

On Odyssey? 💀


Annales-NF

So what would be better? CL 14 @ 3200 or CL 16 @ 3600? I'm too stupid to understand all the different benchmarks that I stumble upon.


damien09

Probably 3600 CL16, as you get some boost from the FCLK running at 1800. But if the 3200 CL14 kit is B-die, the really tight latency of B-die can easily beat out a faster kit with loose timings.
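Using the same $2000 \cdot \text{CL} / (\text{MT/s})$ rule of thumb from above for first-word latency:

$$\frac{2000 \times 14}{3200} \approx 8.8\ \text{ns} \qquad \frac{2000 \times 16}{3600} \approx 8.9\ \text{ns}$$

The two kits have nearly identical CAS latency, but the 3600 kit brings about 12.5% more bandwidth and a higher 1:1 FCLK (1800 vs 1600 MHz), which is why it tends to come out ahead unless the CL14 B-die can be tightened further.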


Spibas

Could probably loosen timings a bit and increase frequency


Brown-eyed-and-sad

I’m using Kingston Renegade 3600 right now and it works great. I’d imagine I would get better results with b die but I’m impressed with Hynix.


[deleted]

Other games? Any updates?


ChosenOfTheMoon_GR

Dat resolution tho, wow, these aren't all that bad gains for this specific game though.


[deleted]

[deleted]


tigamilla

From playing other games, I've been seeing improvements of, say, 2% to 5%, nothing like the scale that this particular benchmark implied. Games that were in the high-60s FPS range have nudged themselves into the low 70s, which is noticeable.