Nvidia's RTX 5090 might draw 600W, other Blackwell GPUs could also increase wattage

Daniel Sims

The big picture: Rising power consumption from generation to generation has raised concerns about the future of dedicated desktop graphics cards. The issue has arisen yet again as more rumors surrounding Nvidia's upcoming Blackwell lineup emerge. However, similar rumors about the Ada Lovelace cards proved unfounded.

A prominent leaker claims Nvidia's upcoming RTX 5000 series graphics cards will consume significantly more power than their predecessors. The rumors might not represent final TDPs, so readers should take the information with a grain of salt.

According to @kopite7kimi, the higher-end Blackwell GPUs will see the largest wattage increases. The flagship GeForce RTX 5090 could draw 600W, a substantial jump over the 4090's 450W. Meanwhile, the 5080 might rise to 400W compared to the 4080's 320W.

However, Kopite shared similar rumors about the RTX 4000 cards two years ago, which eventually proved untrue. In the months before the Ada Lovelace generation's launch, some feared that its top GPU would consume 600W or even 800W.

The rumored RTX 4000 TDPs gradually dropped throughout 2022 until Nvidia launched the lineup with modest wattage changes compared to RTX 3000. The RTX 4090 consumes the same amount of power as the 3090 Ti, the 4080's TDP is identical to the 3080's, and the 4070 draws slightly less than the 3070.

Recent rumors surrounding RTX 5000 TDPs might represent Total Board Power or maximum power draw, which can differ from the power levels most users see in typical use. Furthermore, Nvidia might not have finalized the TDPs for Blackwell yet.
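For readers curious about the gap between a card's rated limit and its real-world draw, here is a minimal sketch using the nvidia-ml-py (pynvml) bindings that compares live power draw against the enforced board-power limit. It assumes an Nvidia GPU and a recent driver, and is for illustration only:

```python
# Minimal sketch: compare a GPU's live power draw to its enforced power
# limit (roughly the board-power ceiling), via the nvidia-ml-py bindings.
# Assumes an Nvidia GPU and driver; install with: pip install nvidia-ml-py
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0           # mW -> W
    limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0  # mW -> W
    print(f"Drawing {draw_w:.0f} W of a {limit_w:.0f} W limit")
finally:
    pynvml.nvmlShutdown()
```

On most cards the live draw sits well below the limit outside of heavy gaming or compute loads, which is why a headline TDP or TBP figure can overstate what users actually experience.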

Projected release dates for RTX 5000 have shifted back and forth over the last few months, but the latest information suggests that the cards might debut at CES 2025 in January. AMD's upcoming RDNA4 series and Intel's Arc Battlemage lineup could appear around the same time or in late 2024.

Blackwell's flagship RTX 5090 is expected to be around 70 percent faster than the 4090, with 28 GB of VRAM and a 448-bit memory bus. Meanwhile, the 5080 could outperform the 4090 by roughly 10 percent with 16 GB of VRAM, and it might launch before the 5090. All RTX 5000 GPUs except the mainstream desktop RTX 5060 are expected to feature GDDR7 memory. The series is based on TSMC's 4N node, a custom 5nm-class EUV process.

 
With no fab improvement, I tend to believe this high power requirement rumor. No matter how you cut it, whether that means going wider or increasing clock speeds, achieving the desired improvement will require more power. Blackwell for AI chips has already shown a significant increase in power requirements.
 
"could also increase wattage" The proper term is power which is measured in watts. "...could also increase power" is correct. "Wattage" is like discussing temperature using the measurement. Like: "What Fahrenheit is it outside?" "What Celsius do I cook this?"
 
Right, but in some climates (northern) you can run your central heating lower thanks to your PC. So ... not such a bad thing for some!
Summers are longer than winters here in central Europe lol. It would be cool if I were at, like, the North Pole :D
 
Real world, the 4090 was WAY more efficient than the 3090 Ti: while the former usually hits 450W at most in normal operation, the 3090 Ti easily topped 500W and could peak at 600W under heavy load.

So the 5090 hitting 600W isn't surprising; it appears to be a significantly larger chip than the 4090.
 
Real world, the 4090 was WAY more efficient than the 3090 Ti: while the former usually hits 450W at most in normal operation, the 3090 Ti easily topped 500W and could peak at 600W under heavy load.

So the 5090 hitting 600W isn't surprising; it appears to be a significantly larger chip than the 4090.

IIRC 4090 die is over 600mm^2 on N5. N4 is barely any denser, and since 5090 is still monolithic, to fit the huge number of extra transistors I'm sure it'll hit 700mm^2+. I'm still not believing this power figure, but then again Blackwell B100/B200 use a lot more power than Hopper H100/H200.
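As a rough sanity check on that guess, here's a back-of-the-envelope sketch. The AD102 die area is from public spec sheets; the ~30% unit-count increase and the small N4-to-N4P density gain are this thread's assumptions, not confirmed numbers:

```python
# Back-of-the-envelope die-area estimate for a monolithic Blackwell flagship.
# AD102 (the RTX 4090's die) is ~609 mm^2; the scaling factors below are
# assumptions from this thread, not confirmed specs.
AD102_AREA_MM2 = 609.0   # known AD102 die size
UNIT_SCALING = 1.30      # assumed ~30% more compute units (rumored)
DENSITY_GAIN = 1.06      # assumed small density gain from a refined node

estimated_area = AD102_AREA_MM2 * UNIT_SCALING / DENSITY_GAIN
print(f"Estimated die area: ~{estimated_area:.0f} mm^2")
# -> roughly 747 mm^2, in line with the 700 mm^2+ guess above
```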
 
I bought a 3090 FE right at the start of the crypto graphics card squeeze, as it was all I could find in stock at RRP (really I wanted a 3080). I always regretted this card. While it was fast, the power draw, noise, and heat were ludicrous; I often ended up throttling it to keep everything running quietly and coolly. I'd never buy a super-high-end card again. In my opinion, they are just not worth the money. Unless you are trying to drive triple-screen gaming on 4K panels etc., you're much better off dropping down at least one SKU, so a 3080, 4080 (or even 4070), etc. With the cost saving you can upgrade as soon as the next gen comes out for less than you would have spent and still end up with change.
 
I bought a 3090 FE right at the start of the crypto graphics card squeeze, as it was all I could find in stock at RRP (really I wanted a 3080). I always regretted this card. While it was fast, the power draw, noise, and heat were ludicrous; I often ended up throttling it to keep everything running quietly and coolly. I'd never buy a super-high-end card again. In my opinion, they are just not worth the money. Unless you are trying to drive triple-screen gaming on 4K panels etc., you're much better off dropping down at least one SKU, so a 3080, 4080 (or even 4070), etc. With the cost saving you can upgrade as soon as the next gen comes out for less than you would have spent and still end up with change.

Yup. Unless you've got a lot of cash to splash (or even if you do), stepping down a tier or two gives you much of the benefit without the cost, giving you a bit more to spend on the next upgrade or on other components. You'll also get a much quieter and cooler card, without having to spend a lot on a new PSU to keep up.
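To make that point concrete, here's a trivial cost sketch; every number below (flagship price, tier-down price, resale fraction) is a made-up placeholder, not a real MSRP or a prediction:

```python
# Rough cost comparison (all figures hypothetical, for illustration only):
# buying a flagship once vs. buying one tier down and upgrading next gen.
FLAGSHIP = 1600          # e.g. a 90-class card at launch (placeholder)
TIER_DOWN = 1000         # e.g. an 80-class card at launch (placeholder)
RESALE_FRACTION = 0.55   # assumed resale value of the old card next gen

one_flagship_two_gens = FLAGSHIP
tier_down_then_upgrade = TIER_DOWN + (TIER_DOWN - TIER_DOWN * RESALE_FRACTION)
print(f"One flagship, kept two gens:  ${one_flagship_two_gens}")
print(f"Tier down, upgraded next gen: ${tier_down_then_upgrade:.0f}")
# -> $1600 vs. ~$1450 under these assumptions: a newer card and change left over
```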
 
With no fab improvement, I tend to believe this high power requirement rumor.
From the article: "However, Kopite shared similar rumors about the RTX 4000 cards two years ago, which eventually proved untrue...."

And NVidia isn't using the same process name. AFAIK, they've gone from N4 to N4P, which, according to TSMC, has 22% better power efficiency.
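To put that figure in perspective, here's a trivial iso-performance estimate. The 22% number is TSMC's marketing claim, whether consumer Blackwell actually uses N4P is this thread's assumption, and real gains depend heavily on the design, so treat this as illustration only:

```python
# Illustrative only: one common reading of a "22% better power efficiency"
# claim is the same performance at proportionally lower power. Real-world
# gains depend on the specific design and operating point.
EFFICIENCY_GAIN = 1.22   # TSMC's marketing figure for N4P

for tdp_w in (450, 600):  # the 4090's TDP and the rumored 5090 TDP
    iso_perf_draw = tdp_w / EFFICIENCY_GAIN
    print(f"{tdp_w} W of work at iso-performance: ~{iso_perf_draw:.0f} W on N4P")
# -> ~369 W and ~492 W, i.e. the node alone wouldn't force a 600 W flagship
```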
 
Nope! No thanks, NVidia. Bring a card to market that beats the 4070 while using LESS power, and I'll take a look.
Why would you look at a 5090 for a card that beats the 4070 using less power? That's just silly.

Perhaps you should look at a 5070?
IIRC 4090 die is over 600mm^2 on N5. N4 is barely any denser, and since 5090 is still monolithic, to fit the huge number of extra transistors I'm sure it'll hit 700mm^2+. I'm still not believing this power figure, but then again Blackwell B100/B200 use a lot more power than Hopper H100/H200.
Well, the 5090 is a 512-bit bus, and if Nvidia follows past examples, they'll probably have a roughly 30% larger GPU to go with it.

It's gonna be a Tesla-sized monster. Can't wait!
 
The 5090 is also rumored to be getting a substantial price increase. The power draw, I really don't think will be that high, but a price increase to around $2K? I could definitely see that happening.
 
"could also increase wattage" The proper term is power which is measured in watts. "...could also increase power" is correct. "Wattage" is like discussing temperature using the measurement. Like: "What Fahrenheit is it outside?" "What Celsius do I cook this?"
ABSOLUTELY, Sir.
There is no such thing as 'wattage'; it is a terrible misnomer.

Watts are the units of power.

In fact, when I'm reading an article or something aloud to someone and it says 'wattage', I both think and say 'power'...
 