A 12GB Nvidia RTX 5070 could debut next January alongside the RTX 5080 and RTX 5090

Daniel Sims

Highly anticipated: Rumors have long suggested that Nvidia will unveil the two top-tier graphics cards in its upcoming RTX 5000 lineup at CES 2025 in January. New reports indicate that a mid-range member of the lineup will also appear at the event, but some of the predicted hardware specifications might disappoint onlookers.

According to established leaker @harukaze5719 and other sources, the long-rumored GeForce RTX 5080 and 5090 will be accompanied at CES 2025 by the 5070. Hardware specifications for the three graphics cards have recently begun materializing.

Information on the 5070 was previously scarce, but it is expected to feature 12GB of VRAM, likely disappointing prospective customers annoyed with Nvidia's stingy VRAM allocation. Although 12GB matches the 4070, the upper mid-range card's predecessor, the new model upgrades to much faster 28Gbps GDDR7 memory on a 192-bit bus.

The 5070's CUDA core count remains unclear, but its base GB205 processor enables a maximum of 6,400. The TBP is 250W, a slight increase from the 4070's 200W.

Prior reports suggested the RTX 5080 also has less VRAM than many hoped for – 16GB. However, a spec sheet citing 10,752 CUDA cores, a 400W TBP, and 32Gbps GDDR7 RAM supports earlier estimates that it could outperform the current flagship, the RTX 4090. Furthermore, leaked shipping manifests indicate that a 24GB 5080 might eventually appear.

Meanwhile, the new top card, the RTX 5090, will be a 32GB monster with a 512-bit bus, 21,760 cores (slightly cut down from the GB202's 24,576), and a 600W TBP. The dramatic wattage increase may look scary, but it might not represent real-world gaming workloads. The three rumored RTX 5000 GPUs will each draw power through a single 12V-2x6 connector, support PCIe 5.0, and include DisplayPort 2.1a.
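
For a sense of scale, effective memory bandwidth follows directly from those rumored figures: per-pin speed times bus width, divided by eight. Here is a back-of-the-envelope Python sketch; note that the 5080's 256-bit bus (implied by its 16GB capacity) and the 5090's 28Gbps memory speed are our assumptions, not part of the leaks.

    # Bandwidth in GB/s = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte
    def bandwidth_gb_s(pin_speed_gbps: float, bus_width_bits: int) -> float:
        return pin_speed_gbps * bus_width_bits / 8

    print(f"RTX 5070: {bandwidth_gb_s(28, 192):.0f} GB/s")  # 672 GB/s (rumored 28Gbps, 192-bit)
    print(f"RTX 5080: {bandwidth_gb_s(32, 256):.0f} GB/s")  # 1024 GB/s (256-bit assumed from 16GB)
    print(f"RTX 5090: {bandwidth_gb_s(28, 512):.0f} GB/s")  # 1792 GB/s (512-bit rumored, 28Gbps assumed)

If those numbers hold, even the 5070's 672GB/s would comfortably beat the 4070's 504GB/s.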

Nvidia hasn't commented on rumors regarding its upcoming products, but the company recently confirmed that CEO Jensen Huang will give an opening keynote at CES. Huang hasn't presented at CES since 2019, suggesting that a big announcement is planned for the 2025 event.

Although Nvidia isn't expected to face competition in the top GPU weight classes going into the next generation, AMD and Intel are preparing mid-range cards that could offer performance similar to the 5070. Team Red's upcoming RDNA 4 series, likely to be called Radeon RX 8000, could appear at CES. Meanwhile, Intel's Arc Battlemage GPUs remain on schedule for a holiday 2024 launch.

 
Plenty of room for 5070 Ti and 5080 Ti then.

I just hope AMD can compete with the 5060/5070 series this time.
Hope to see FSR 4 launching with Radeon 8000 next year + improved RT performance.
 
The 5070 and 5080 are kind of underwhelming, however I guess that's what happens with no competition! They will still sell like hotcakes.
 
We can thank AMD for not competing.
AMD has been EXTREMELY competitive in the low end and mid range. Dollar for dollar, AMD beats Nvidia at nearly every price point in both raster and ray tracing.

It doesn't matter what AMD does, nVidia fanbois like yourself will still buy their product. Now I have an nVidia stalker (you) who can't help but bring up how everything is AMD's fault. Hurricane Helene? AMD. The housing market? AMD. That pigeon that **** on your car? AMD's fault.
 
We can thank AMD for not competing.
Consumers don't buy AMD even when they're competitive. AMD last had an undisputed lead during the era of the GTX 480 Fermi cards, and only achieved 40% of sales during that period.

People only want AMD to compete so they can buy cheaper nvidia cards, and at this point nvidia's data center sales generate so much more profit that their position is unassailable.
 
The 5070 and 5080 are kind of underwhelming, however I guess that's what happens with no competition! They will still sell like hotcakes.
While I agree with you, on paper nVidia are claiming the 5080 will be 10 percent more powerful than a 4090, so not so shabby - but we've heard these sorts of claims before and been disappointed, so we'll see.
12GB on a 5070 seems a little anaemic too - we had 12GB on 3060s years ago.
 
While I agree with you, on paper nVidia are claiming the 5080 will be 10 percent more powerful than a 4090, so not so shabby - but we've heard these sorts of claims before and been disappointed, so we'll see.
12GB on a 5070 seems a little anaemic too - we had 12GB on 3060s years ago.
The 3060 12GB was beaten by the 3060 Ti 8GB in tons of games and still is. VRAM doesn't really matter when the GPU is weak. The 3060 had a weak GPU; the 5070 won't.

There is no way the 5080 is going to beat the 4090 by 10% looking at those specs. It will slot in between the 4080 and 4090.
 
My guess is that the 5080 will perform almost exactly as well as the 4090D, in order to comply with export restrictions to China.
 
AMD has been EXTREMELY competitive in the low end and mid range. Dollar for dollar, AMD beats Nvidia at nearly every price point in both raster and ray tracing.
AMD has not been competitive at all outside of rasterization performance, and even there they lose in most newer games.

Most people buy a GPU looking at the entire feature set, not just rasterization performance.

Also, AMD drivers are worse than Nvidia's. Games simply run worse overall, whereas Nvidia provides a solid experience across the board.

 
12GB for a $600 card was rude in 2023; it seems outrageous in 2025.

JEDEC has been talking about 3GB GDDR7 modules for 2025. I'm not certain how they are addressed, but it seems possible that if you have, say, a 192-bit controller like the 4070 and the inevitable 5070, you could theoretically have an 18GB model: 6 x 3GB modules.

That sounds almost ideal.
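
If it helps, the arithmetic is just one module per 32-bit memory channel; a minimal sketch:

    # One GDDR module sits on each 32-bit channel of the memory controller.
    bus_width = 192            # bits, as on the 4070 and the presumed 5070
    modules = bus_width // 32  # 6 channels -> 6 modules
    print(modules * 2, "GB")   # 12 GB with today's 2GB modules
    print(modules * 3, "GB")   # 18 GB with JEDEC's 3GB modules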
 
AMD has not been competitive at all outside of rasterization performance, and even there they lose in most newer games.

Most people buy a GPU looking at the entire feature set, not just rasterization performance.

Also, AMD drivers are worse than Nvidia's. Games simply run worse overall, whereas Nvidia provides a solid experience across the board.

I'm talking about price. You can get a ton more performance for every dollar spent going with AMD. Unless you ABSOLUTELY need a 4090, performance per dollar, AMD wins every time. The 4080 Super was a good buy for a while when it was hovering around $900, but they are creeping up to $1200-1300 in some instances, so just get a 4090 at that point. The 4070 was a decent buy for a while, but with the 7900 XT readily available at $650 and the 4070 having price creep into the $700s in some instances, the 7900 XT is the obvious buy. The 4070 Ti only made sense for a while, but even that is pushing past the $900 mark these days. The 7900 XTX is the unsung workstation bargain because of its large VRAM buffer. They are actually being bought up en masse by people who want to build an AI compute cluster on a budget. Back when the H100 was pushing US$60,000, the 7900 XTX was literally being bought up by the pallet by AI startups. You could seriously get a pallet of 7900 XTXs for the cost of one H100.

Why does AMD win in all of these categories? Because no one is buying their cards, they're sitting on shelves, and AMD is offering steep discounts to move product. There is no logical reason for it, either. FSR is nearly on par with DLSS now, and hardly anyone uses the nVidia feature set outside of 4090 owners. Why is that? Well, most people who can't afford 4090s also can't afford gaming monitors that take advantage of their software suite.


The driver issues thing is a lie; this isn't 2015.

From your article:
The issue is rare and only affects a subset of systems that had a PC update occur during the installation of an Adrenalin driver.
This was caused by Windows forcing a system update while you were updating your drivers, which created a problem in the boot order when Windows was loading. Frankly, I blame this on M$. Anyway, it is a non-issue anymore. It could technically happen with nVidia drivers, too. But again, everything is AMD's fault.

I'm tired of sounding like an AMD apologist, but there isn't anything to apologize for at this point. Frankly, what I'm tired of is fanbois talking about how great it is to be bent over a table and pounded by nVidia. Going from the 10-series/Vega days, where nVidia was objectively better in every way, they've gone from a good product to a fashion statement. In the last 5 years, AMD GPUs have gone from having an issue under every rock you looked under to people struggling to make a link between a Windows update bricking systems and the AMD GPUs that just happen to be in them.
 
That pigeon that **** on your car? AMD's fault.

Technically, that's the poo of a theropod dinosaur.

OT, the Leatherman is going to skin the whales & the nvidia fanbois extra gud this gen. The tears of the fanbois on twitter are extra salty this time IMO.
 
The 3060 12GB was beaten by the 3060 Ti 8GB in tons of games and still is. VRAM doesn't really matter when the GPU is weak. The 3060 had a weak GPU; the 5070 won't.

There is no way the 5080 is going to beat the 4090 by 10% looking at those specs. It will slot in between the 4080 and 4090.

I found it interesting that the power usage between the 3060 and 3060 Ti was nearly identical. The 3060 was not very efficient for the performance that it gave users. However, it was a pretty decent lower-mid gaming card for 1080p. Many folks argued that the 12GB 3060 was better because MOAR RAM! However, the 3060 Ti was the better card - sure, it had a little less RAM, but in the end it had better performance and used the same amount of power as the 3060 did.

I've got a 3060 and a 3060 Ti in two different computers at home. I'd take the 3060 Ti every time over the 3060 for the games I play.

As for the 50xx series coming out from Nvidia... when you have the 4070 only being as good as the RTX 3080 (on average), I don't have much hope for the 5070 being any better than the 4080. My guess is it'll come in just under it and still be on the sad side of the VRAM offering because it'll only have 12GB. Price-wise, based on the 40xx series, I'd say around $600-650.
The 5080 will come out roughly 30% faster than the 4070 and be priced at the $1000-1100 mark.

That puts both those cards out of my comfort zone for purchasing because 1) the 5080 will be priced way too high for me, and 2) the 5070 won't offer enough performance gains over the 3080 Ti that I have in my gaming system to warrant me dropping around $650.

My hope is that AMD comes out with something that hits a similar performance range to the 5070 (hopefully higher) and comes priced more around $500. That would be something I'd be very interested in getting, especially if the power draw is significantly less than what my 3080 Ti requires.
 
I found it interesting that the power usage between the 3060 and 3060 Ti was nearly identical. The 3060 was not very efficient for the performance that it gave users. However, it was a pretty decent lower-mid gaming card for 1080p. Many folks argued that the 12GB 3060 was better because MOAR RAM! However, the 3060 Ti was the better card - sure, it had a little less RAM, but in the end it had better performance and used the same amount of power as the 3060 did.

I've got a 3060 and a 3060 Ti in two different computers at home. I'd take the 3060 Ti every time over the 3060 for the games I play.

As for the 50xx series coming out from Nvidia... when you have the 4070 only being as good as the RTX 3080 (on average), I don't have much hope for the 5070 being any better than the 4080. My guess is it'll come in just under it and still be on the sad side of the VRAM offering because it'll only have 12GB. Price-wise, based on the 40xx series, I'd say around $600-650.
The 5080 will come out roughly 30% faster than the 4070 and be priced at the $1000-1100 mark.

That puts both those cards out of my comfort zone for purchasing because 1) the 5080 will be priced way too high for me, and 2) the 5070 won't offer enough performance gains over the 3080 Ti that I have in my gaming system to warrant me dropping around $650.

My hope is that AMD comes out with something that hits a similar performance range to the 5070 (hopefully higher) and comes priced more around $500. That would be something I'd be very interested in getting, especially if the power draw is significantly less than what my 3080 Ti requires.
Well, a large part of the performance is the memory bus, and we end up seeing specific capacities based around the bus size. The 3060 was going to have either 6GB or 12GB of RAM. It couldn't use 12GB of VRAM, but 6GB was too little, so they ended up having to put 2GB modules on it instead of 1GB, IIRC.
 
IMO this is a lost opportunity for AMD in not pushing for better texture quality in games to fill the 16GB VRAM buffer.
The 5070 will be competing with the rumored RDNA 4 flagship. Blackwell will test the loyalty of Nvidia's customers.
 
12GB for a $600 card was rude in 2023; it seems outrageous in 2025.

JEDEC has been talking about 3GB GDDR7 modules for 2025. I'm not certain how they are addressed, but it seems possible that if you have, say, a 192-bit controller like the 4070 and the inevitable 5070, you could theoretically have an 18GB model: 6 x 3GB modules.

That sounds almost ideal.
You are absolutely right. I love an "Nvidia bad" circlejerk (which is very deserved lately) as much as the next guy, but people are missing the blatantly obvious fact that Nvidia is just waiting for 3 GB modules to become available next year, which will mean 12 GB on a 128-bit bus (5060), 18 GB on a 192-bit bus (5070), and 24 GB on a 256-bit bus (5080).

AMD will most likely do the exact same thing on their RDNA 4 cards: keep the 128-bit and 192-bit buses on their 600- and 700-tier cards respectively, and launch 12 GB/18 GB versions with the 3 GB modules once available.
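
The channel math works out the same way up and down the stack; a quick sketch (the tier-to-bus-width pairings here are this post's assumptions, not confirmed specs):

    # Capacity = (bus width / 32 bits per channel) * module density.
    for tier, bus_bits in [("5060", 128), ("5070", 192), ("5080", 256)]:
        channels = bus_bits // 32
        print(f"{tier}: {channels * 2} GB -> {channels * 3} GB with 3GB modules")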
 
VRAM doesn't really matter when the GPU is weak. The 3060 had a weak GPU; the 5070 won't.
As usual, you have no clue what you're talking about. That's not how any of it works.

VRAM usage is 90% textures, and texture quality settings in games do not affect framerate. The only thing that stops you from turning texture quality settings to max is whether you have enough VRAM for it, nothing else.

It doesn't matter how weak a GPU core is, it can always benefit from more VRAM by using higher texture quality settings. The 3060 makes full use of its 12 GB of VRAM for textures. It can use higher texture quality settings than the 3060 Ti, 3070 and 3080, and match the 3080 12 GB and 3080 Ti in texture quality, because once again texture quality doesn't affect FPS so it doesn't matter how fast the GPU is. You can give a GTX 1650 (the bottom of the barrel of Nvidia's current lineup) 12 GB or more, and it will benefit from it by using better textures.

There is no reason for Nvidia/AMD/Intel not to give a GPU as much VRAM as they can. Every GPU does benefit from more VRAM. The only reason they give lower-end cards less VRAM is to cut costs, NOT because of this "weaker GPUs can't use it" nonsense.
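
To put rough numbers on the texture side of that argument, here's a sizing sketch assuming uncompressed RGBA8 at 4 bytes per texel; real games mostly ship block-compressed assets (e.g. BC7 at roughly 1 byte per texel), which shrinks these figures by about 4x:

    # Approximate VRAM footprint of one texture with a full mip chain (mips add ~1/3).
    def texture_mib(width: int, height: int, bytes_per_texel: int = 4) -> float:
        return width * height * bytes_per_texel * 4 / 3 / 2**20

    print(f"{texture_mib(2048, 2048):.0f} MiB")           # ~21 MiB uncompressed
    print(f"{texture_mib(4096, 4096):.0f} MiB")           # ~85 MiB uncompressed
    print(f"{texture_mib(4096, 4096, 1):.0f} MiB (BC7)")  # ~21 MiB block-compressed

Either way, the cost of a higher texture tier is almost entirely capacity, not compute.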
 
As usual, you have no clue what you're talking about. That's not how any of it works.

VRAM usage is 90% textures, and texture quality settings in games do not affect framerate. The only thing that stops you from turning texture quality settings to max is whether you have enough VRAM for it, nothing else.

It doesn't matter how weak a GPU core is, it can always benefit from more VRAM by using higher texture quality settings. The 3060 makes full use of its 12 GB of VRAM for textures. It can use higher texture quality settings than the 3060 Ti, 3070 and 3080, and match the 3080 12 GB and 3080 Ti in texture quality, because once again texture quality doesn't affect FPS so it doesn't matter how fast the GPU is. You can give a GTX 1650 (the bottom of the barrel of Nvidia's current lineup) 12 GB or more, and it will benefit from it by using better textures.

There is no reason for Nvidia/AMD/Intel not to give a GPU as much VRAM as they can. Every GPU does benefit from more VRAM. The only reason they give lower-end cards less VRAM is to cut costs, NOT because of this "weaker GPUs can't use it" nonsense.
Guess back in the day those 4GB GT 710 cards really benefited from that extra VRAM.
 
Guess back in the day those 4GB GT 710 cards really benefited from that extra VRAM.
Yeah, they literally did. That's how VRAM works. They were still as slow as any other GT 710 card, but they could use higher resolution textures with no penalty to performance.

Isn't it funny that you can make bad-faith arguments taking the discussion to ridiculous extremes, and still fail to disprove my point?
 
AMD has not been competitive at all outside of rasterization performance, and even there they lose in most newer games.

Most people buy a GPU looking at the entire feature set, not just rasterization performance.

Also, AMD drivers are worse than Nvidia's. Games simply run worse overall, whereas Nvidia provides a solid experience across the board.

"The problem seems to be limited to an extremely small number of systems"
Ah right, NVIDIA is always problem-free. Oh wait, remember when New World was literally destroying NVIDIA RTX 3090s? Or, you know, they have similar problems (sorry for the long link, I'm on mobile):
https://www.pcworld.com/article/241...-nvidia-drivers-how-to-avoid-the-problem.html

Why does the comment section here always have to bash AMD drivers? They really haven't been much worse or better than NVIDIA's for ages.

As for the rest of the argument, it's mostly nonsense. AMD's drivers don't run the games worse. Sometimes they're a tiny bit slower with releasing game-ready drivers, but that's hardly an issue and, given how much smaller they are, understandable.
 
While I agree with you, on paper nVidia are claiming the 5080 will be 10 percent more powerful than a 4090, so not so shabby - but we've heard these sorts of claims before and been disappointed, so we'll see.
12GB on a 5070 seems a little anaemic too - we had 12GB on 3060s years ago.
Nice. Where do they claim that? I will believe it when I see it :D That would be crazy, because the 4090 has like 5,000 more cores and they are using 5nm again.
 
As usual, you have no clue what you're talking about. That's not how any of it works.

VRAM usage is 90% textures, and texture quality settings in games do not affect framerate. The only thing that stops you from turning texture quality settings to max is whether you have enough VRAM for it, nothing else.

It doesn't matter how weak a GPU core is, it can always benefit from more VRAM by using higher texture quality settings. The 3060 makes full use of its 12 GB of VRAM for textures. It can use higher texture quality settings than the 3060 Ti, 3070 and 3080, and match the 3080 12 GB and 3080 Ti in texture quality, because once again texture quality doesn't affect FPS so it doesn't matter how fast the GPU is. You can give a GTX 1650 (the bottom of the barrel of Nvidia's current lineup) 12 GB or more, and it will benefit from it by using better textures.

There is no reason for Nvidia/AMD/Intel not to give a GPU as much VRAM as they can. Every GPU does benefit from more VRAM. The only reason they give lower-end cards less VRAM is to cut costs, NOT because of this "weaker GPUs can't use it" nonsense.
VRAM usage is not over 90% textures.

The 3060 Ti beats the 3060 in pretty much all games regardless of settings, even in brand-new games at much higher resolutions than these GPUs can handle. Those extra 4GB of VRAM don't help when the GPU is like 40% weaker.

In new games, the 3060 Ti performs like a 6700 XT. The 3060 performs like a 5700 XT.
 