AMD Ryzen 7 9800X3D vs. Intel Core Ultra 9 285K: 45 Game Benchmark

Woah there TechSpot! It's not a completely useless gaming CPU. It's ever-so-slightly better in that really good game Skull and Bones; all 120 people playing it might consider paying the extra $200 for the extra 3 fps they might get out of it. /s

You would have to be pretty mad to buy this over the AMD if your purpose is gaming. That's pretty clear. Looking forward to the Intel boys showing up explaining we're all wrong though.
 
Dear OP, please upgrade to 1440p. I get that 1080p is great for "taking the GPU out of the picture", but even the 7-year-old gaming box my daughter uses is 1440p and will never EVER see a 1080p display attached.
 
You're doing benchmarks with a 4090 and can't take 5 minutes to toggle 4K and give us those benchmarks? I get that the GPU does most of the heavy lifting, but it's still a difference.
 
Dear OP, please upgrade to 1440p. I get that 1080p is great for "taking the GPU out of the picture", but even the 7-year-old gaming box my daughter uses is 1440p and will never EVER see a 1080p display attached.

The whole point of 1080p game testing for CPUs is that it limits the GPU's impact on the results - you can run into game engine limitations at high FPS, though. If you start testing at 1440p or even 4K, then you start running into the performance limits of the GPU, which gives misleading results when you are looking at CPU performance. For example, looking at a random old 4K benchmark of Cyberpunk 2077 with a 3090, there is a 1 fps difference between the best performing CPU (3600XT Modified) and the 17th best CPU (3950X stock) - that literally tells you nothing about CPU performance, due to margin of error, but everything about GPU performance.

Or, in other words, if you want clear results showing the performance difference between CPUs in games, you need to limit the impact of the GPU on the results. You do this by running at 1080p and hoping that you don't run into limitations of the game engine.
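If it helps to see that logic as numbers, here's a toy model (my own sketch, nothing from the article): the FPS you measure is roughly capped by whichever of the CPU or GPU hits its limit first, so once the GPU cap drops below every CPU's cap, every CPU reports the same result.

```python
# Toy bottleneck model (my own illustration, not TechSpot's methodology):
# the FPS you measure is roughly the lower of what the CPU and GPU can each deliver.

def measured_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """The frame rate you actually see is limited by the slower of the two caps."""
    return min(cpu_fps_cap, gpu_fps_cap)

# Hypothetical CPU-limited frame rates for two CPUs in the same game.
cpus = {"CPU A": 220.0, "CPU B": 160.0}

for label, gpu_cap in [("1080p (GPU cap ~300 fps)", 300.0),
                       ("4K (GPU cap ~90 fps)", 90.0)]:
    results = {name: measured_fps(cap, gpu_cap) for name, cap in cpus.items()}
    print(label, results)

# At 1080p the 220 vs 160 fps gap is visible; at 4K both read 90 fps,
# which says nothing about the CPUs and everything about the GPU.
```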
 
The 9800X3D is an absolute beast. It's a tempting upgrade. Sadly there will be no AMD flagship GPU to go with it...
Dear OP, please upgrade to 1440p. I get that 1080p is great for "taking the GPU out of the picture", but even the 7-year-old gaming box my daughter uses is 1440p and will never EVER see a 1080p display attached.
You clearly don't "get that 1080p is great for 'taking the GPU out of the picture'" if you felt the need to tell the author to upgrade to 1440p.

Learn how CPU benchmarks work please.
The whole point of 1080p game testing for CPUs is that it limits the GPU's impact on the results - you can run into game engine limitations at high FPS, though. If you start testing at 1440p or even 4K, then you start running into the performance limits of the GPU, which gives misleading results when you are looking at CPU performance. For example, looking at a random old 4K benchmark of Cyberpunk 2077 with a 3090, there is a 1 fps difference between the best performing CPU (3600XT Modified) and the 17th best CPU (3950X stock) - that literally tells you nothing about CPU performance, due to margin of error, but everything about GPU performance.

Or, in other words, if you want clear results showing the performance difference between CPUs in games, you need to limit the impact of the GPU on the results. You do this by running at 1080p and hoping that you don't run into limitations of the game engine.
We really should have a questionnaire for comments. The question would be "do you think 1080p is a good resolution for CPU tests?", and if they say anything other than "yes, you need to remove GPU limitations to push CPUs to their limit", they get banned from the comments on that story.

It's embarrassing that Intel couldn't even take AMD's last-gen gaming crown.

I get that it has good productivity performance, but how is this CPU not $400 instead of over $600?
It's probably a cost thing. Arrow Lake uses tiles, with multiple dies, and the CPU cores are on TSMC 3nm. So Intel went from making their own chips to paying a middleman; it wouldn't surprise me if the manufacturing costs are at least double those of Raptor Lake per chip. I imagine at $400 Intel may actually lose $$$.
 
TechPowerUp did 1440p and 4K; the difference between my 14700K and this was around 4 percent, so I can guess the margins between all the top CPUs. I think the title should read "9800X3D: new 1080p champ".
 
Woah there TechSpot! It's not a completely useless gaming CPU. It's ever-so-slightly better in that really good game Skull and Bones; all 120 people playing it might consider paying the extra $200 for the extra 3 fps they might get out of it. /s

You would have to be pretty mad to buy this over the AMD if your purpose is gaming. That's pretty clear. Looking forward to the Intel boys showing up explaining we're all wrong though.
My purpose is gaming and I'd rather have the 285K.

I have a 9800X3D btw.
 
My purpose is gaming and I'd rather have the 285K.

I have a 9800X3D btw.
You know that just hurts to read...

Impressed that no Intel fans turned up though; damn, AMD had fans turn up even in their darkest days!

Either that or Intel fans are really holding onto "you should test at 4K" so they can drink as much copium as possible.
 
The whole point of 1080p game testing for CPUs is that it limits the GPU's impact on the results - you can run into game engine limitations at high FPS, though. If you start testing at 1440p or even 4K, then you start running into the performance limits of the GPU, which gives misleading results when you are looking at CPU performance. For example, looking at a random old 4K benchmark of Cyberpunk 2077 with a 3090, there is a 1 fps difference between the best performing CPU (3600XT Modified) and the 17th best CPU (3950X stock) - that literally tells you nothing about CPU performance, due to margin of error, but everything about GPU performance.

Or, in other words, if you want clear results showing the performance difference between CPUs in games, you need to limit the impact of the GPU on the results. You do this by running at 1080p and hoping that you don't run into limitations of the game engine.

While I get that, I will never game at that res, nor will my daughter... so in that respect the results are meaningless.
 
There it is! I knew some Intel people had to arrive eventually. Thank you for your contribution. Glad to see all logic, reasoning and pure facts are out the window with this one.
Yup, and they're still proving that Da Nile is more than a river in Egypt.
 
Dear OP, please upgrade to 1440p. I get that 1080p is great for "taking the GPU out of the picture", but even the 7-year-old gaming box my daughter uses is 1440p and will never EVER see a 1080p display attached.
Please watch one of the THREE videos Steve has made on why testing is done at 1080p and not 1440p or 4K.
 
You know that just hurts to read...

Impressed that no Intel fans turned up though; damn, AMD had fans turn up even in their darkest days!

Either that or Intel fans are really holding onto "you should test at 4K" so they can drink as much copium as possible.
I've been doing a series of tests between the 12900K and the 9800X3D; there are games where I have to drop to 1080p with DLSS Performance to actually register a difference between the two. Both are highly tuned (6400c28 memory for the X3D, 7000c32 for the Intel). The only one where I actually noticed a difference that matters is MSFS. In everything else I'd rather have the MT performance of my 2021 Intel chip over the X3D.

There are literally things I could do with my 2021 Intel while gaming that are just impossible with the X3D due to the lack of cores.

The 9800X3D is usually 20% faster than the 12900K (with both tuned), but the 4090 is such a huge bottleneck anyway that it's kinda irrelevant. Even the 5090 will be severely bottlenecking these chips. Again, that's when both are tuned; at stock the differences are much larger.
 
If I remember correctly, Gears 5 is Microsoft-developed? Usually their games are well-optimized for every CPU, so this may just be an update. The problem is that Intel misled its supporters. They said they were going to use V-Cache on their new CPUs. They did not.

Another thing: I would guess heavy strategy games run better on systems with SMT/HT than on those without. This is a hard pill for Intel and their fans to swallow. Personally, I think Intel was not prepared for the performance uplift of the 9800X3D; even so, their new CPUs should have beaten the older 7800X3D. I will not argue when it comes to productivity - they are beasts there, as mentioned. What about their new CPU coming early next year? Let's see how Intel tries to counter AMD's new release. It is good for AMD, but it's kinda boring when a fight is one-sided (in games).
 
I've been doing a series of tests between the 12900K and the 9800X3D; there are games where I have to drop to 1080p with DLSS Performance to actually register a difference between the two.
Are you saying you're at 4K and using DLSS to render at 1080p, or your output is 1080p and you're using DLSS Performance to drop all the way down to 960 x 540 rendering?
In everything else I'd rather have the MT performance of my 2021 Intel chip over the X3D.
There are literally things I could do with my 2021 Intel while gaming that are just impossible with the X3D due to the lack of cores.
See, what's strange about these statements is that even in TechSpot's own review, the 9800X3D was practically the same as the 12900K in all the usual productivity benchmarks. It was ever-so-slightly faster or slower than the 12900K depending on what you were doing, but nothing extreme; you wouldn't notice a difference according to most benchmarks. It would be the same in most tasks other than gaming, where it would be noticeably quicker.

Out of curiosity, what things could you not do on the 9800X3D vs the 12900K? As someone who bought a 7950X3D because I mainly play games, transcode video and play with the odd game engine (recently UE5, editing stuff an architect made), I've played with disabling the non-X3D CCD, disabling SMT, etc. I was surprised how little difference most of the changes made (except for video transcoding, where having all the cores and SMT on does make it quite a bit quicker).
Also, what is your definition of "literally could not do the same work" - adding 3 seconds onto a compile? Just so we don't get mixed up between "cannot physically run a particular workload" and "it takes a few seconds longer to complete".
The 980xx 3d is usually 20% faster than the 12900k (with both tuned) but the 4090 is as a huge bottleneck anyways that it's kinda irrelevant. Even the 5090 will be severely bottlenecking these chips. Again, when both are tuned. At stock the differences are much larger.
Hey, I'd much rather my GPU was the bottleneck than my CPU. In my experience anyway, when a CPU becomes the bottleneck you get much more hitching, stuttering, just crazy frametimes all over the place, while when a GPU is maxed out it's still a smooth experience - just free heating for whatever room you're in.
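To put a number on what that hitching looks like, here's a quick sketch (my own, not from the review) of how frametime spikes drag down the "1% low" figure even when the average FPS barely moves:

```python
# Quick sketch (my own, not from the review) of how frametime spikes show up
# as poor "1% lows" even when average FPS looks fine.

def one_percent_low_fps(frametimes_ms: list[float]) -> float:
    """Average FPS over the worst 1% of frames (largest frametimes)."""
    worst = sorted(frametimes_ms, reverse=True)
    count = max(1, len(worst) // 100)
    avg_worst_ms = sum(worst[:count]) / count
    return 1000.0 / avg_worst_ms

# Hypothetical captures: mostly 10 ms frames (100 fps), the second one
# with occasional 40 ms hitches like a CPU-limited run.
steady = [10.0] * 1000
hitchy = [10.0] * 990 + [40.0] * 10

for name, frames in [("GPU-bound but steady", steady), ("CPU-bound with hitching", hitchy)]:
    avg_fps = 1000.0 / (sum(frames) / len(frames))
    print(f"{name}: avg {avg_fps:.0f} fps, 1% low {one_percent_low_fps(frames):.0f} fps")
```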
 
Are you saying you're at 4K and using DLSS to render at 1080p, or your output is 1080p and you're using DLSS Performance to drop all the way down to 960 x 540 rendering?


See, what's strange about these statements is that even in TechSpot's own review, the 9800X3D was practically the same as the 12900K in all the usual productivity benchmarks. It was ever-so-slightly faster or slower than the 12900K depending on what you were doing, but nothing extreme; you wouldn't notice a difference according to most benchmarks. It would be the same in most tasks other than gaming, where it would be noticeably quicker.

Out of curiosity, what things could you not do on the 9800X3D vs the 12900K? As someone who bought a 7950X3D because I mainly play games, transcode video and play with the odd game engine (recently UE5, editing stuff an architect made), I've played with disabling the non-X3D CCD, disabling SMT, etc. I was surprised how little difference most of the changes made (except for video transcoding, where having all the cores and SMT on does make it quite a bit quicker).
Also, what is your definition of "literally could not do the same work" - adding 3 seconds onto a compile? Just so we don't get mixed up between "cannot physically run a particular workload" and "it takes a few seconds longer to complete".

Hey, I'd much rather my GPU was the bottleneck than my CPU. In my experience anyway, when a CPU becomes the bottleneck you get much more hitching, stuttering, just crazy frametimes all over the place, while when a GPU is maxed out it's still a smooth experience - just free heating for whatever room you're in.
1080p, and using DLSS dropping to whatever super low resolution I end up at.

For example, I play a lot of modded Cyberpunk. But whenever a new patch comes out, it breaks mod compatibility. So what I do while waiting for the mod patches is download a not-so-legal copy of the game and have it install in the background while playing Cyberpunk. That's basically akin to unpacking a 100 GB file. The 12900K handles that with zero intervention: Thread Director sends the game to the P-cores and the unpacking to the E-cores. I don't do anything, I just play the game like normal. That's a definite no-go on the 9800X3D.

Now, the way I see it the 9950X3D will be the absolute GOAT, but if I had to choose between the 285K and the 9800X3D, the choice is obvious to me. AMD needs to move to 10 or 12 cores per CCD.
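For anyone wondering what "whatever super low resolution" works out to, here's a rough sketch using the commonly cited per-axis scale factors for each DLSS mode (my summary, not anything from the article):

```python
# Rough sketch of DLSS internal render resolutions. The per-axis scale factors
# below are the commonly cited values for each mode (my assumption, not from
# the article), so treat the output as approximate.

DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def render_resolution(output_w: int, output_h: int, mode: str) -> tuple[int, int]:
    """Internal resolution DLSS renders at before upscaling to the output."""
    scale = DLSS_SCALE[mode]
    return round(output_w * scale), round(output_h * scale)

for mode in DLSS_SCALE:
    print(mode, render_resolution(1920, 1080, mode))

# Performance mode at a 1080p output is 960x540, which is why it shifts the
# load so heavily onto the CPU.
```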
 
1080p, and using DLSS dropping to whatever super low resolution I end up at.

For example, I play a lot of modded Cyberpunk. But whenever a new patch comes out, it breaks mod compatibility. So what I do while waiting for the mod patches is download a not-so-legal copy of the game and have it install in the background while playing Cyberpunk. That's basically akin to unpacking a 100 GB file. The 12900K handles that with zero intervention: Thread Director sends the game to the P-cores and the unpacking to the E-cores. I don't do anything, I just play the game like normal. That's a definite no-go on the 9800X3D.

Now, the way I see it the 9950X3D will be the absolute GOAT, but if I had to choose between the 285K and the 9800X3D, the choice is obvious to me. AMD needs to move to 10 or 12 cores per CCD.
So let me get this straight: you're going with a lifetime 20% (or more) reduction in gaming performance because the Intel can unpack something at the same time as playing a game? Also, the CPU has changed again: you were talking about a 12900K, now you're talking about the 285K?

I have a 5800X3D system behind me. Before I actually go test this, are you telling me that if I try to play Cyberpunk, then try to download another copy of Cyberpunk at the same time, one of them will completely fail to load?
 
So let me get this straight: you're going with a lifetime 20% (or more) reduction in gaming performance because the Intel can unpack something at the same time as playing a game? Also, the CPU has changed again: you were talking about a 12900K, now you're talking about the 285K?

I have a 5800X3D system behind me. Before I actually go test this, are you telling me that if I try to play Cyberpunk, then try to download another copy of Cyberpunk at the same time, one of them will completely fail to load?
The 285K is better than the 12900K, so whatever the 12900K can do, the 285K does it better.

I'm telling you, if you try installing Cyberpunk in the background while playing the game, then yes, the 5800X3D will completely fail.
 