AMD Ryzen 7 5800X3D vs. 9700X: 3D V-Cache Still Going Hard!

Fresh install…? Pff, that's a lot of work, is it really needed? I've had Win 11 for a year now.
Maybe test between those two? Fresh install vs. update… :) ;)
 
Dear Lord, yet another pointless 4090@1080p benchmark. This is so silly :)

But no doubt there will be a lot of sage pontificating in the comments, as if these results had some meaningful bearing on real-life scenarios.

Also, saying that the 5800X3D wasn't an expensive CPU is the icing on the cake. And the 5xxx X family still remains a better proposition for somebody on a tight budget.
 
Strange article… given you can still buy the 5700X3D quite readily, why wouldn't you compare that rather than the 5800X3D, which is sold out nearly everywhere?

And given the 5700X3D has a 30% discount right now on Amazon, you'd think they'd want to get some extra click-through commissions?
 
Only some games can take advantage of the extra L3 cache, but when they do, it's magic.
Also, the 1% lows benefit a lot when CPU-bound. But as people here say, it was not cheap at launch (450 euros) and it's not cheap now. In fact, you can't find a standalone 5800X3D where I live; only bundles are available now.
Sure, it was close to 316 euros some time back, but only for a short period.

From my point of view, each GPU should be tested with a fresh install of the OS; cloning is easy now. Especially when switching from Nvidia drivers to AMD. No matter what DDU or other software you use, Windows is garbage at the registry level and with system files.

I just saw it available at only one shop… for 500 euros… damn.

L.E.: the 7800X3D is cheaper; the tray version (no box) is 400 euros.
 
The 5700X3D is good value (the 5800X3D is EOL and only 5% faster), but it won't come close to a 7800X3D or even a higher-end Raptor Lake chip in most games. Also, some games scale very well with DDR5.

Can't wait for the 9800X3D. First 3D chip that won't get gimped on clock speeds. Leaks look very promising: like 20% improved ST and 30% improved MT perf over the 7800X3D in Cinebench. That is impressive.

The 9800X3D is going to be the ultimate gaming chip, beating the 9900X3D and 9950X3D too, due to being single-CCD. A single CCD is always preferred for gaming.

The 3D cache is way less of a limitation with 9000X3D in general. Clock speeds will be way higher this time.

Might even replace my 7800X3D with a 9800X3D, as I am CPU-bound in many games (360 Hz user, might be going 480 Hz in the next year).
 
Given that you've had to resort to a 4090 at 1080p to show a difference between the two, I think it's pretty clear-cut why most gamers haven't bothered to upgrade to AM5 and Ryzen 9XXX yet.

Better to save your $$$ and put it towards the next expensive GPU.
 
They have to do the CPU testing with a 4090 at 1080p or even lower, like every other benchmarking site. Otherwise the charts would be very boring for the readers.

At 4K, for instance, one would think of the 9700X and 5800X3D as the same, almost to the point of identical gaming performance. The same goes with a GPU like a 3060 12GB or RX 6600 8GB, which seems more proportional to 1080p. So a CPU review needs to artificially create a CPU-bound situation, and the 4090 with its power is a godsend for that, even if it doesn't resemble a real-life scenario for a 4090.

So if you want benchmarks with differences to talk about, you need to tolerate that. If you game at 4K or even 1440p, and/or if you use a GPU below a 4090, like a 7800 XT or 4070 or even slower, differences between CPUs will be smaller, of course depending on the games. The situation then changes to a GPU-bound real-world scenario, which is where most of us are or want to be in reality, and then even a slower CPU seems to perform on par with faster ones. But those reviews are telling us something important nonetheless: how those CPUs will presumably age with new hardware (GPUs) and new software (games).
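To make that concrete, here is a minimal sketch of the bottleneck idea (the numbers and the simple min() model are my own illustration, not anything measured in the article): whichever of the CPU or GPU is slower sets the frame rate you actually see.

```c
/* Toy model: the displayed frame rate is roughly min(CPU fps, GPU fps).
 * All numbers below are hypothetical, purely for illustration. */
#include <stdio.h>

static double observed_fps(double cpu_fps, double gpu_fps) {
    return cpu_fps < gpu_fps ? cpu_fps : gpu_fps;  /* the slower part sets the pace */
}

int main(void) {
    double cpu_a = 140.0, cpu_b = 200.0;       /* two hypothetical CPUs        */
    double gpu_4k = 90.0, gpu_1080p = 300.0;   /* hypothetical 4090 GPU limits */

    printf("4K:    CPU A %.0f fps | CPU B %.0f fps\n",
           observed_fps(cpu_a, gpu_4k), observed_fps(cpu_b, gpu_4k));
    printf("1080p: CPU A %.0f fps | CPU B %.0f fps\n",
           observed_fps(cpu_a, gpu_1080p), observed_fps(cpu_b, gpu_1080p));
    return 0;
}
```

At 4K both hypothetical CPUs show 90 fps (GPU-bound); at 1080p the 140 vs. 200 fps gap becomes visible, which is exactly what these reviews are after.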

For example: do prepare for lots and lots of 'absurd' 1080p reviews with the even mightier 5090, once it launches. Seems absurd, okay, but it's logical, because you need the fastest GPU and the lowest resolutions to clearly show the differences in CPU performance in this context. And a 5090, and that's the point, will show greater differences between the CPUs than today's flagship, the 4090. Those reviews will give us a better view of what type of silicon will age better with the faster GPUs and more demanding games of the future.
 
They have to do the CPU testing with a 4090 at 1080p or even lower, like every other benchmarking site. Otherwise the charts would be very boring for the readers.
There's no point explaining it; commenters who do not understand basic testing methodology will never understand it.

Even funnier, when they do get given the benchmarks they ask for, they complain it doesn't really give them any information, or simply say "See! Told you my 2500K is still perfectly fine to game at 8K".

So it's just trolling at this point; better to just ignore them or block them so you don't have to see their rubbish.
 
Aaand I'm still using that 2021 5800X, due to this site's own comparison benchmarks showing little consistent uplift (or coverage thereof) at 4K, and me using 3440x1440 in the office and 4K in the lounge far less often. It just wasn't worth the £300 two years ago, and not the same amount rn when I'd eventually be paying 2-3x the price for a later 8c/16t X3D plus the necessary new mobo/RAM. As it is, I'm waiting to see how the upcoming X3Ds step up before even starting to make plans.

As an aside, when the question of better gaming fps came up, it turned out better for consistent uplift and coverage to grab a 7900 XTX, replacing a 6800 XT which went to upgrade my gf's 3070 at 3440x1440 (a two-birds-with-one-stone deal), for not much more than that 7800X3D and matching mobo/RAM would have cost anyway. And I'd still probably only be making use of some 50-60% of that CPU, just like the 5800X. Some of my games are notably CPU-heavy (as well as often not appearing on console, which is a primary reason I go PC), but they also generally sit in genres that don't require high fps to remain enjoyable, so it evens out.

Tbh I've been running to the 'higher end' for two gens now, and as much as I appreciate the end result, it can get exhausting. In future I think I'll keep any plans to simpler questions answered and solutions delivered.
 
For example: do prepare for lots and lots of 'absurd' 1080p reviews with the even mightier 5090, once it launches. Seems absurd, okay, but it's logical, because you need the fastest GPU and the lowest resolutions to clearly show the differences in CPU performance in this context. And a 5090, and that's the point, will show greater differences between the CPUs than today's flagship, the 4090. Those reviews will give us a better view of what type of silicon will age better with the faster GPUs and more demanding games of the future.
Or then again, maybe not. Zen5 is the first consumer-class CPU to REALLY have a good AVX512 implementation. How much will future games use AVX512? Oh, we don't know yet. So what do these 1080p benchmarks tell us about which CPU will be better in the future? Basically nothing. AVX512 is about the only technology today, outside very large caches, that can give high double-digit performance boosts in games. Predicting how AVX512 will be implemented in future games is nearly impossible, so any "long-term prediction" benchmarks are totally useless.
 
A test that's always missing from X3D reviews is 3D simulation. I'd like to know if it's actually an improvement compared to non-3D V-Cache parts in that field. I've read somewhere that it does give some improvement, but since they didn't provide more detailed data, I can't exactly trust them.
 
I want to see online games in the testing, such as WoW, FF XIV, New World, Throne and Liberty, etc. I heard that for MMORPGs, 3D V-Cache gives a very good uplift compared to regular games.
 
So what do these 1080p benchmarks tell us about which CPU will be better in the future? Basically nothing.
I don't think so. History tells us otherwise, e.g. in all those "CPU XY revisited" articles. CPUs age differently. Some of those revisits are even tested with the 4090. There are a couple of revisited articles and graphs from Steve here at TechSpot.

Subtle CPU differences, even at 1080p, tend to grow over time with modern games and faster GPUs, and the 1% and 0.1% low FPS usually show this very well, even at higher resolutions.
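(For reference, a rough sketch of how a "1% low" figure is often derived from raw frame times; definitions vary between reviewers, this is just one common interpretation, and the frame times in this example are made up purely for illustration.)

```c
/* One common way "1% low FPS" is computed: sort the frame times, average the
 * slowest 1% of frames, and convert that average back to FPS. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);             /* descending: slowest (largest ms) first */
}

static double one_percent_low_fps(double *frame_ms, size_t n) {
    qsort(frame_ms, n, sizeof(double), cmp_desc);
    size_t worst = n / 100;               /* slowest 1% of frames */
    if (worst == 0) worst = 1;
    double sum = 0.0;
    for (size_t i = 0; i < worst; ++i) sum += frame_ms[i];
    return 1000.0 / (sum / worst);        /* average ms -> FPS */
}

int main(void) {
    /* Hypothetical frame times (ms): mostly ~7 ms (~144 fps) with two stutters. */
    double frames[200];
    for (int i = 0; i < 200; ++i) frames[i] = 7.0;
    frames[50] = 25.0;
    frames[120] = 30.0;
    printf("1%% low: %.1f fps\n", one_percent_low_fps(frames, 200));
    return 0;
}
```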

That said, you should (in the context of better aging) start betting on single-core and multi-core scaling, on cache vs. frequency, on hyperthreading vs. physical cores, on rentable units vs. orthodox core management, and so on.

I strongly doubt AVX512 will become important in gaming (the consoles don't have it, Intel ditched it, we saw no game developers jumping on the older 128-bit and 256-bit SSE/AVX trains, etc.). But it's a great boost for scientific apps. Maybe for hyperrealistic physics calculations in games (but chances are, those will be done at the GPU level).
 
I don't think so. History tells us otherwise, e.g. in all those "CPU XY revisited" articles. CPUs age differently. Some of those revisits are even tested with the 4090. There are a couple of revisited articles and graphs from Steve here at TechSpot.

Subtle CPU differences, even at 1080p, tend to grow over time with modern games and faster GPUs, and the 1% and 0.1% low FPS usually show this very well, even at higher resolutions.

That said, you should (in the context of better aging) start betting on single-core and multi-core scaling, on cache vs. frequency, on hyperthreading vs. physical cores, on rentable units vs. orthodox core management, and so on.
CPUs age differently, that's true. However, when there is something that could offer a massive improvement, like AVX512, it's pretty much impossible to tell how much impact it will have in the future.
I strongly doubt AVX512 will become important in gaming (the consoles don't have it, Intel ditched it, we saw no game developers jumping on the older 128-bit and 256-bit SSE/AVX trains, etc.). But it's a great boost for scientific apps. Maybe for hyperrealistic physics calculations in games (but chances are, those will be done at the GPU level).
Even Cinebench ditched AVX512 because Intel used it so badly. However, Zen5 proves that AVX512 is excellent if the CPU architecture is done right. And Zen5 is.

The reason AVX512 should get more support is that it takes literally a few seconds to add support for it. In other words, if you can get a double-digit performance increase for a few seconds of work, only morons leave that on the table. And that is the situation with Zen5. And that is why I say the Cinebench developers are *****s. And most current software is trash. Zen5 is an excellent CPU, but the benchmarks suck. Again, a double-digit performance increase takes a few seconds of work. Why not get it?
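To put the "few seconds" claim in concrete terms (my own illustration, assuming a GCC or Clang toolchain; the loop and flags below are hypothetical, not from the post): often nothing AVX512-specific has to be written at all, just a compile flag that lets the auto-vectorizer use the wider registers.

```c
/* saxpy.c - a plain scalar loop, nothing AVX512-specific written by hand.
 *   gcc -O3 saxpy.c             -> baseline SSE/AVX code
 *   gcc -O3 -mavx512f saxpy.c   -> the auto-vectorizer may emit 512-bit (zmm) code
 * (Flags shown for GCC/Clang; actual gains depend on the workload and CPU.) */
#include <stddef.h>

void saxpy(float a, const float *restrict x, float *restrict y, size_t n) {
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];   /* candidate for 16-wide FP32 vectorization */
}
```

Hand-tuned intrinsics can go further, but the flag-only route is the "few seconds of work" case being described.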
 
The thing I like most about new high-end gear is that it makes the previous generation kit that much more affordable for non-gamers.
 
Zen5 is an excellent CPU, but the benchmarks suck. Again, a double-digit performance increase takes a few seconds of work. Why not get it?
That's true. Intel has too much market share and applies much of its power to push vendors, software, and benchmark developers to go their way.

A glimpse of the other way: Phoronix showed the gains from AVX512 and other technologies on the new Ryzen 9000 series, when scientific and performance apps can scale freely with new CPU features (on Linux, of course): Phoronix 9950X review
 
Dear Lord, yet another pointless 4090@1080p benchmark. This is so silly :)

But no doubt there will be a lot of sage pontificating in the comments, as if these results had some meaningful bearing on real-life scenarios.

Also, saying that the 5800X3D wasn't an expensive CPU is the icing on the cake. And the 5xxx X family still remains a better proposition for somebody on a tight budget.
Hey look, another fool who doesn't understand what a CPU bottleneck is!

Where are your 5800X3D vs. 9700X tests at 4K? We're waiting to see them, oh great sage!
CPUs age differently, that's true. However, when there is something that could offer a massive improvement, like AVX512, it's pretty much impossible to tell how much impact it will have in the future.

Even Cinebench ditched AVX512 because Intel used it so badly. However, Zen5 proves that AVX512 is excellent if the CPU architecture is done right. And Zen5 is.

The reason AVX512 should get more support is that it takes literally a few seconds to add support for it. In other words, if you can get a double-digit performance increase for a few seconds of work, only morons leave that on the table. And that is the situation with Zen5. And that is why I say the Cinebench developers are *****s. And most current software is trash. Zen5 is an excellent CPU, but the benchmarks suck. Again, a double-digit performance increase takes a few seconds of work. Why not get it?
You use the word "could" a lot. AVX512 could change a lot. Or it could do nothing in the long run. Or Intel could release a faster CPU, or…

There is always something new on the horizon. If you look at every review with "well, they should have waited because this thing could change," you'll read a ton of reviews and never buy anything.
 
You use the word "could" a lot. AVX512 could change a lot. Or it could do nothing in the long run. Or Intel could release a faster CPU, or…

There is always something new on the horizon. If you look at every review with "well, they should have waited because this thing could change," you'll read a ton of reviews and never buy anything.

AVX512 is not something "on the horizon"; it has been around for nearly a decade already. Also, taking advantage of it does not require years of development or anything. It takes a few seconds of a developer's time.

I'd just change the word "could" to "can", and now we are in a totally different situation.
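And for shipping it safely on CPUs that lack AVX512, GCC and Clang offer function multi-versioning, which is about as close to the "few seconds" figure as it gets; this is my own sketch of that mechanism, not something from the thread:

```c
/* Rough sketch of GCC/Clang function multi-versioning on x86-64: the compiler
 * builds AVX-512, AVX2, and baseline variants of the function and picks the
 * best one at runtime, so the same binary still runs on older CPUs. */
#include <stddef.h>

__attribute__((target_clones("avx512f", "avx2", "default")))
void scale(float *restrict dst, const float *restrict src, float k, size_t n) {
    for (size_t i = 0; i < n; ++i)
        dst[i] = src[i] * k;
}
```

A manual alternative is checking __builtin_cpu_supports("avx512f") once at startup and picking a code path yourself.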
 
All major companies have one problem: they are their own worst enemies. We have seen this a lot. Two examples: the Nvidia GTX 1080 for $599 and the 1070 Ti at $449; when you OC'd the 1070 Ti, you could match the 1080. And AMD releasing the 2600 at $199 and the 1600 AF at, I think, $85 MSRP on launch (doubt it was sold for that much). The only thing the 2600 had going for it depended on luck; I could do an OC to 4.2 GHz, I think at 1.375 V, can't remember. But there was a massive price difference for almost the same performance.
 