Are you saying you're outputting at 4K and using DLSS to render internally at 1080p, or your output is 1080p and DLSS Performance drops the render all the way down to 960x540?
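For anyone following the maths: DLSS modes scale each axis by a fixed factor, with Performance at 0.5, so the two cases above fall straight out. A quick sketch, assuming the commonly cited per-axis ratios (the exact Balanced figure varies by source):

```python
# Commonly cited per-axis scale factors for each DLSS mode (assumption, not
# pulled from NVIDIA docs): Quality ~0.667, Balanced ~0.58, Performance 0.5,
# Ultra Performance ~0.333.
DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal render resolution for a given output resolution and DLSS mode."""
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080) -- 4K output
print(internal_resolution(1920, 1080, "Performance"))  # (960, 540)  -- 1080p output
```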
See, what's strange about these statements is that even in TechSpot's own review, the 9800X3D was practically the same as the 12900K in all the usual productivity benchmarks. It was ever so slightly faster or slower than the 12900K depending on what you were doing, but nothing extreme; according to most benchmarks you wouldn't notice a difference in anything other than gaming, where the 9800X3D would be noticeably quicker.
Out of curiosity, what things could you not do on the 9800X3D vs the 12900K? As someone who bought a 7950X3D because I mainly play games, transcode video and play with the odd game engine (recently UE5, editing stuff an architect made), I've played with disabling the non-X3D CCD, disabling SMT, etc. I was surprised how little difference most of those changes made (except for video transcoding, where having all the cores and SMT on does make it quite a bit quicker).
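For what it's worth, you can also pin a single process to the X3D CCD instead of disabling the other CCD in the BIOS. A minimal sketch, assuming Linux, assuming logical CPUs 0-15 map to CCD0 (the cache CCD) on a 7950X3D, and with a made-up PID; check your actual topology with lstopo or /sys/devices/system/cpu/ first:

```python
import os

# Assumption: logical CPUs 0-15 (8 cores plus their SMT siblings) are the
# X3D CCD on this box. Core numbering is not guaranteed -- verify it first.
X3D_CCD_CPUS = set(range(16))

def pin_to_x3d(pid: int) -> None:
    """Restrict an already-running process to the X3D CCD (Linux only)."""
    os.sched_setaffinity(pid, X3D_CCD_CPUS)

pin_to_x3d(12345)  # hypothetical PID -- replace with the game's actual PID
```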
Also, what is your definition of "literally could not do the same work"? Adding three seconds onto a compile? Just so we don't get mixed up between "cannot physically run a particular workload" and "it takes a few seconds longer to complete".
Hey, I'd much rather my GPU were the bottleneck than my CPU. In my experience, when the CPU becomes the bottleneck you get far more hitching and stuttering, with frametimes all over the place, whereas when the GPU is maxed out it's still a smooth experience, just free heating for whatever room you're in.
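If you want to see that in numbers rather than feel, a rough sketch of why reviewers quote 1% lows alongside averages: a CPU-bound hitch tanks the 1% low long before it moves the average FPS. The frametime values here are made up purely for illustration:

```python
def fps_stats(frametimes_ms: list[float]) -> tuple[float, float]:
    """Return (average FPS, 1% low FPS) from a list of frametimes in ms."""
    avg_fps = 1000 / (sum(frametimes_ms) / len(frametimes_ms))
    worst = sorted(frametimes_ms, reverse=True)          # slowest frames first
    slice_1pct = worst[: max(1, len(worst) // 100)]      # worst 1% of frames
    low_1pct_fps = 1000 / (sum(slice_1pct) / len(slice_1pct))
    return avg_fps, low_1pct_fps

smooth = [10.0] * 99 + [12.0]    # GPU-bound: consistent ~10 ms frames
stuttery = [8.0] * 99 + [80.0]   # CPU-bound: fast frames plus one 80 ms hitch
print(fps_stats(smooth))     # ~(99.8, 83.3) -- feels smooth
print(fps_stats(stuttery))   # ~(114.7, 12.5) -- higher average, but it stutters
```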