The RTX 5090 reportedly uses Nvidia's biggest die since the RTX 2080 Ti

Daniel Sims

Highly anticipated: As the unveiling of Nvidia's consumer Blackwell GPUs draws near, a clearer picture of the company's next-generation graphics cards is beginning to emerge. The new lineup's flagship will undoubtedly set new performance benchmarks, but the latest information suggests it will also use one of the biggest chips in Nvidia's history.

Trusted leaker "MEGAsizeGPU" recently claimed that Nvidia's upcoming GB202 graphics processor, which will power the GeForce RTX 5090, uses a 24mm x 31mm die. If the report is accurate, it might support earlier rumors claiming the graphics card will retail for nearly $2,000.

A 744mm² die would make the GB202 roughly 22 percent larger than the RTX 4090's 609mm² AD102 GPU. It would also be the company's largest die since the TU102, which measured 754mm² and served as the core of the RTX 2080 Ti, released in 2018.
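For anyone checking the math, here's a quick sketch of the die-size comparison using the figures above (all numbers are the rumored or commonly cited die sizes, not official Nvidia specs):

```python
# Back-of-the-envelope die-size comparison based on the figures cited above.
gb202_w_mm, gb202_h_mm = 24, 31            # rumored GB202 dimensions
gb202_area = gb202_w_mm * gb202_h_mm       # 744 mm^2

ad102_area = 609                           # RTX 4090's AD102 (~609 mm^2)
tu102_area = 754                           # RTX 2080 Ti's TU102

print(f"GB202: {gb202_area} mm^2")                      # 744 mm^2
print(f"vs AD102: {gb202_area / ad102_area - 1:+.0%}")  # roughly +22%
print(f"vs TU102: {gb202_area / tu102_area - 1:+.0%}")  # roughly -1%
```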

Although the RTX 5090 utilizes a larger GPU than the 4090 (and reportedly won't even use the entire die), earlier reports indicated that the final consumer card will be physically smaller than its predecessor. The 4090's monstrous profile primarily stems from its triple-slot cooler. In contrast, the Blackwell flagship is expected to use a more typical two-slot design with dual fans to fit a broader range of chassis.

However, all other known specs suggest that the 5090 represents a substantial leap forward. With 21,760 CUDA cores and 32GB of 28Gbps GDDR7 VRAM on a 512-bit bus, it should offer an estimated 70 percent performance boost over the 4090. The clock speed remains unclear, but the card features a 600W TBP. Connectivity support includes PCIe 5.0 and DisplayPort 2.1a.
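The headline memory bandwidth follows directly from those rumored numbers; a minimal sketch, assuming a 28Gbps per-pin data rate on a 512-bit bus:

```python
# Peak memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte.
data_rate_gbps = 28        # rumored GDDR7 speed per pin
bus_width_bits = 512       # rumored RTX 5090 bus width

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.0f} GB/s")   # ~1792 GB/s, vs ~1008 GB/s on the RTX 4090 (21Gbps x 384-bit)
```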

The second Blackwell GPU, GB203, will power the RTX 5080 and 5070 Ti. While the 5080 utilizes the whole die with 10,752 cores, 16GB of VRAM on a 256-bit bus, and a 400W TBP, the 5070 Ti is cut down to 8,960 cores and draws 300W.

The RTX 5070 will also use the full GB205 die, with 6,400 cores and 12GB of VRAM on a 192-bit bus. Reports indicate that the 5090, 5080, 5070, and 5070 Ti will debut at CES in January, featuring GDDR7 VRAM. The RTX 5060 will likely launch a few weeks later.

AMD is also expected to unveil the Radeon RX 8000 series, based on the RDNA 4 architecture, at CES, while Intel might reveal its Arc Battlemage GPUs next month.

 
I'm really enjoying the fact that we have all this competition in the GPU marketplace between the big 3. If anyone remembers, just a couple of years ago it was Nvidia dominating everything. Same goes for the CPU marketplace: between roughly 2009 and 2017, Intel dominated AMD. This is all good for business, folks!
 
Ahh, the 5070: the specs say it's a 60-class card being sold as entry-level high-end. I guess charging more for 60-class cards wasn't enough; they had to increase margins by renaming them as 70-class cards. A 192-bit bus....

I know the memory bandwidth is faster, but I don't think people understand how much bus width can impact performance in some games. It's not as simple as A+B=C. There are instances where bus width means more than just total bandwidth. This is especially true now that upscaling is a necessary-yet-unnecessary reality we live in.

The 192-bit bus used to be reserved for 60-class cards, and 128-bit for the 50-class cards.

The amount of shrinkflation in modern graphics cards is offensive. Either make the cards slower to match a price or raise the price to match the bill of materials. Don't do both. I don't think people are angry just at the increase in cost; I think they're angry at getting less for more money.

I'm sure someone will reply that it's AMD's fault, which is funny because they were very competitive on price with the 6000 series, and it sold worse than the 7000 series, which didn't compete on price. So consumers basically showed AMD that they're going to buy Nvidia regardless of how competitive AMD is.
 
I'm really enjoying the fact that we have all this competition in the GPU marketplace between the big 3. If anyone remembers, just a couple of years ago it was Nvidia dominating everything. Same goes for the CPU marketplace: between roughly 2009 and 2017, Intel dominated AMD. This is all good for business, folks!
Are you broken?

IMO there's no competition in the GPU space. It's Nvidia charging whatever they want and delivering less each refresh while upping the prices, with the other two companies getting a participation medal.

The market is very heavily Nvidia-dominated, everything coming out seems to have a semi for DLSS, and reviews all pander to it as well. Game developers target Nvidia for optimization because around 80% of the install base is using their hardware (ref: Steam Hardware Survey).

AMD does have competing products to the RTX 4080, 4070 and 4060 class cards. Intel has 4050/4060-class hardware too, but neither company gets a sale over team green 9 times out of 10 because you **must** have DLSS. "Pay the extra" is all I see these days in every review.

So, to bring it back full circle: Nvidia will charge whatever they want next release. Prices will go up. They will sell out. Their competitors won't gain market share because they are portrayed negatively by reviewers, the media and game developers. Consumers won't care what anyone else does, even when it's competitive, because it's not an Nvidia product. Sales already confirm this is what happens when consumers are given the choice.
 
So the gap between the best and the worst is going to continue to widen, and so will the prices.
Nothing comes close to the 4090 today, and the 5090 is rumored to be 70% faster? If so, that is absolutely insane.
People with mid-range cards today are already unable to play UE5 games comfortably since the base performance requirements are so high.
Then there's the trend of relying on upscaling to get playable framerates in the first place, which makes everything even worse.
There are no bang-for-buck options. It's not a fun landscape, IMO.
You can buy a 3050 for $250, which is all but an obsolete card; you can't comfortably play games with it.
Doesn't look like this is going to get better with the new gen :/

I remember the days when you could buy two mid-range cards for $150 apiece and, with SLI, get better performance than the $500 flagship.
Today you're paying so much more and you're still coming up short.
 
Are you broken?

IMO there's no competition in the GPU space. It's Nvidia charging whatever they want and delivering less each refresh while upping the prices, with the other two companies getting a participation medal.

The market is very heavily Nvidia-dominated, everything coming out seems to have a semi for DLSS, and reviews all pander to it as well. Game developers target Nvidia for optimization because around 80% of the install base is using their hardware (ref: Steam Hardware Survey).

AMD does have competing products to the RTX 4080, 4070 and 4060 class cards. Intel has 4050/4060-class hardware too, but neither company gets a sale over team green 9 times out of 10 because you **must** have DLSS. "Pay the extra" is all I see these days in every review.

So, to bring it back full circle: Nvidia will charge whatever they want next release. Prices will go up. They will sell out. Their competitors won't gain market share because they are portrayed negatively by reviewers, the media and game developers. Consumers won't care what anyone else does, even when it's competitive, because it's not an Nvidia product. Sales already confirm this is what happens when consumers are given the choice.

DLSS has been really bad for consumers, but the sheep lap it up.
 
DLSS has been really bad for consumers, but the sheep lap it up.
People seem to forget that their new cards won't support the new DLSS, so they're stuck using FSR if they decide to stick with their last-gen cards. When DLSS 4 comes out, it won't matter for 40 series users, just as 30 series users didn't benefit from DLSS 3. Under ideal circumstances they could benefit from it, but if you're on a 30 series card, and many are, you're using FSR now. Very few games are developed under ideal circumstances.

So that 4090 people have? Better cozy up to FSR if you plan on skipping the 50 series, just as 3090 owners had to if they skipped the 40 series.

The situation is so much worse when you include the constant gen-specific tech that becomes irrelevant as soon as the new cards come out. To me, the way Nvidia handles DLSS makes it a reason to avoid their cards, not a selling point. Then, with the fractional gains we see every new generation paired with price increases, I honestly don't know how anyone can be pro-Nvidia unless they need to be on the cutting edge of tech for work or something. If you run a business where this tech is necessary for you to be market competitive, I'm 100% in favor of it. However, it's absolutely toxic for consumers. Playing COD is not a business; there is no being market competitive in videogames. Gaming is a hobby, and the idea that people stress themselves out trying to be "pro gamers" with nothing more than a leaderboard number to show for it is silly. What happened to fun?

The whole industry has become toxic in more ways than I'm able to list. I remember gaming as something I used to do when I was too tired after work to do anything else, or when I wanted to SAVE money, so I'd just stay in and play Skyrim all weekend or something. This mentality where people take on second jobs so they can AFFORD to play games is just absurd.
 
DLSS has been really bad for consumers, but the sheep lap it up.
Yeah I miss the old, much blurrier, TAA we've been using for the last 10-15 years.

I do hope AMD sort out their RT performance though, and I'm really looking forward to seeing what they do with FSR with dedicated hardware acceleration.
 
Ahh, the 5070: the specs say it's a 60-class card being sold as entry-level high-end. I guess charging more for 60-class cards wasn't enough; they had to increase margins by renaming them as 70-class cards. A 192-bit bus....

I know the memory bandwidth is faster, but I don't think people understand how much bus width can impact performance in some games. It's not as simple as A+B=C. There are instances where bus width means more than just total bandwidth. This is especially true now that upscaling is a necessary-yet-unnecessary reality we live in.

The 192-bit bus used to be reserved for 60-class cards, and 128-bit for the 50-class cards.

The amount of shrinkflation in modern graphics cards is offensive. Either make the cards slower to match a price or raise the price to match the bill of materials. Don't do both. I don't think people are angry just at the increase in cost; I think they're angry at getting less for more money.
The GeForce 8600 GT was a 128-bit card. The GeForce 8800 Ultra was a 256-bit card that cost about $1,100 after inflation. So can we stop crying like children over the 60 series having 192 bits now? Or are we going to have yet another generation of people whining about how they can't buy a 512-bit 5090 for $500 like it's 2009?

Besides, bits do NOT matter. Performance does. If a 192-bit bus provides enough bandwidth for these cards to run faster than their predecessors, then so be it. You can complain about "shrinkflation", but you're just plain wrong. These cards are HUGE, and that will come at a price. Welcome to hyperinflation. This is what happens when you double your money supply.

Or just look at Nvidia's margins, then consider that their AI datacenter business has increased revenue fivefold, yet their margins haven't increased by double digits.

Just get a 4070ti and enjoy your games or something.
I'm sure someone will reply that it's AMD's fault, which is funny because they were very competitive on price with the 6000 series, and it sold worse than the 7000 series, which didn't compete on price. So consumers basically showed AMD that they're going to buy Nvidia regardless of how competitive AMD is.
AMD themselves have long admitted that the RX 6000 series was deprioritized in favor of EPYC production during the pandemic. Somehow this is the consumers' fault. You COULD NOT find an RX 6000 card throughout 2021 and into 2022. But again, this is the fault of consumers, for some reason.

SMH.
 
People seem to forget that their new cards won't support the new DLSS, so they're stuck using FSR if they decide to stick with their last-gen cards. When DLSS 4 comes out, it won't matter for 40 series users, just as 30 series users didn't benefit from DLSS 3. Under ideal circumstances they could benefit from it, but if you're on a 30 series card, and many are, you're using FSR now. Very few games are developed under ideal circumstances.

So that 4090 people have? Better cozy up to FSR if you plan on skipping the 50 series, just as 3090 owners had to if they skipped the 40 series.
That's just you completely misunderstanding it AND, to be fair to you, Nvidia's naming has been rubbish.

The only thing the 40 series could do over any of the older cards was run Frame Generation.
Ray Reconstruction, DLSS, DLAA, etc. all continued working with all generations of RTX cards.
Edit: Ray Reconstruction was a brand-new feature of DLSS 3.5, which goes to show how bad the naming is.
 
The GeForce 8600 GT was a 128-bit card. The GeForce 8800 Ultra was a 256-bit card that cost about $1,100 after inflation. So can we stop crying like children over the 60 series having 192 bits now? Or are we going to have yet another generation of people whining about how they can't buy a 512-bit 5090 for $500 like it's 2009?

Besides, bits do NOT matter. Performance does. If a 192-bit bus provides enough bandwidth for these cards to run faster than their predecessors, then so be it. You can complain about "shrinkflation", but you're just plain wrong. These cards are HUGE, and that will come at a price. Welcome to hyperinflation. This is what happens when you double your money supply.
Adding bus width essentially just adds traces on the PCB. That's an oversimplification, but increasing bus width doesn't add much cost. The problem is that some of these features need 32 or 64 bits of bus width to work properly. So now you're waiting for enough bus to open up for the feature to function. You're delaying data that could be used for rendering to make room for upscaling or ray tracing. It gets delayed by a clock cycle or, in some instances, needs to be used every clock cycle. So that 192-bit bus on the 5070? It's realistically a 160-bit bus. When you get down to cards with a 128-bit bus, because of how many things that card needs to do just to function, it really only has 96 bits available. So the faster bandwidth helps, but the smaller the bus width gets, the bigger the impact these "necessary features" have on the overall performance of the card. This is why you see massive gains when running DLSS on things like the 80 and 90 class cards but sometimes only single-digit gains on lower-spec cards.
Just get a 4070ti and enjoy your games or something.
Their Linux support is trash, and that's really why an Nvidia product will never be an option for me. They said they're working on improving it, but I'll believe it when I see it. For the time being, Arc cards have better Linux driver support than Nvidia cards. My next card is likely going to be an 8800 XT or a used 7900 XT/XTX if the price is right.
AMD themselves have long admitted that the RX 6000 series was de prioritized for EPYC production during the pandemic. Somehow this is the consumers fault. You COULD NOT find a RX 6000 card throughout 2021 and into 2022. But again, this is the fault of consumers, for some reason.

SMH.
They admitted they skimped on development and marketing, and the incentives they did issue mostly let scalpers buy cards at rock-bottom prices. Then the supply chain issues hit shortly after, and neither company could get new cards to market.
That's just you completely misunderstanding it AND, to be fair to you, Nvidia's naming has been rubbish.

The only thing the 40 series could do over any of the older cards was run Frame Generation.
Ray Reconstruction, DLSS, DLAA, etc. all continued working with all generations of RTX cards.
Edit: Ray Reconstruction was a brand-new feature of DLSS 3.5, which goes to show how bad the naming is.
Frame gen is now considered an essential feature and if you still have a 30 series card, you're gonna need to use FSR to get it. I can't wait to see what new features DLSS 4 brings with the 50 series that 40 series users will need to start using FSR to get.
 
Frame gen is now considered an essential feature and if you still have a 30 series card, you're gonna need to use FSR to get it. I can't wait to see what new features DLSS 4 brings with the 50 series that 40 series users will need to start using FSR to get.
No, no, it's definitely not considered an essential feature; it does not help low framerates.
30 fps with frame gen on feels absolutely awful. You have to be at a minimum of 60 fps, and it still feels pretty awful; at 75 fps or above you can turn frame gen on and it doesn't feel too bad.

An essential feature? Please, it barely does anything useful.
 
No, no, it's definitely not considered an essential feature; it does not help low framerates.
30 fps with frame gen on feels absolutely awful. You have to be at a minimum of 60 fps, and it still feels pretty awful; at 75 fps or above you can turn frame gen on and it doesn't feel too bad.

An essential feature? Please, it barely does anything useful.
The issue is that the people who are struggling to get 60 FPS are the ones using it the most, even if it isn't ideal. 60-class card users, the vast majority of card owners, are the ones who use it the most. So we're stuck in a situation where the majority of cards being sold can't play games comfortably and have to turn to things like frame gen to get close to 60 FPS. The problem isn't that you shouldn't turn it on if you're getting less than 60 FPS; the problem is that people are turning it on anyway to get a playable experience. And, really, the impact could be compared to using early LCDs with high latency, which people played on for years anyway. We didn't really exit "the dark times" of LCD gaming until well into the mid-2010s, and that experience was considered high-end at the time. Put a 2015 gaming monitor next to a display made in the last five years and that's basically the difference you feel when turning on frame gen. That's a perfectly acceptable sacrifice for many people in today's market.

And while people talk about the increase in product prices being aligned with inflation, the increase in wages has not kept up with costs, and that's the real issue. It's a nice cop-out argument until you realize people are essentially making half as much as they were 5 years ago.
 
I'm really enjoying the fact that we have all this competition in the GPU marketplace between the big 3. If anyone remembers, just a couple of years ago it was Nvidia dominating everything. Same goes for the CPU marketplace: between roughly 2009 and 2017, Intel dominated AMD. This is all good for business, folks!
Is that sarcastic? AMD's GPU share is collapsing, and Intel is pulling out of the GPU space. If there were competition, a 5090 would be closer to $1,000. Funnily enough, Nvidia won mostly on software, at least in the realm of CUDA. Is there going to be serious competition? Maybe... but the bigger the gap gets in the software stacks, the more difficult it will be. We are talking about $30 to $50 billion of investment before anyone takes back serious market share. The only way to have competition is if Nvidia turns into Intel... mismanagement and shareholder cannibalization. The other way is for some of the big tech companies to produce their own chips and code... and then, if they succeed, use those chips for games and visual applications.
 
The issue is that the people who are struggling to get 60 FPS are the ones using it the most, even if it isn't ideal. 60-class card users, the vast majority of card owners, are the ones who use it the most. So we're stuck in a situation where the majority of cards being sold can't play games comfortably and have to turn to things like frame gen to get close to 60 FPS. The problem isn't that you shouldn't turn it on if you're getting less than 60 FPS; the problem is that people are turning it on anyway to get a playable experience. And, really, the impact could be compared to using early LCDs with high latency, which people played on for years anyway. We didn't really exit "the dark times" of LCD gaming until well into the mid-2010s, and that experience was considered high-end at the time. Put a 2015 gaming monitor next to a display made in the last five years and that's basically the difference you feel when turning on frame gen. That's a perfectly acceptable sacrifice for many people in today's market.

And while people talk about the increase in product prices being aligned with inflation, the increase in wages has not kept up with costs, and that's the real issue. It's a nice cop-out argument until you realize people are essentially making half as much as they were 5 years ago.
RTX will take decades to become real-time 60 fps. Keep in mind that what we call ray tracing now is an extremely minimal implementation of tracing light. But we are on the right track. For gamers it is all about what we want: 144 Hz and super-fast response times, or eye candy. I personally take the hit in all single-player games, and in competitive titles I go hardcore: medium settings, RT off, etc.
 
Just remember: EVERYTHING happens for a reason.....

And the reason we are in this convoluted clustercrap sh*tfest is that stupid people continue to do stupid sh*t, like paying nGreediya's outrageous prices for GPUs over & over & over again, which only condones their practice of raising prices & producing minuscule/pitiful performance improvements with every new generation of cards while constantly charging more for them!

They're just following Intel's lead; Intel has done the exact same thing for many, many years now....

So.... if you purchased an NV card in the past 5-8 years, this entire situation is YOUR OWN DAMNED FAULT ! Just deal with it :D
 
The issue is that the people who are struggling to get 60 FPS are the ones using it the most, even if it isn't ideal. 60-class card users, the vast majority of card owners, are the ones who use it the most. So we're stuck in a situation where the majority of cards being sold can't play games comfortably and have to turn to things like frame gen to get close to 60 FPS. The problem isn't that you shouldn't turn it on if you're getting less than 60 FPS; the problem is that people are turning it on anyway to get a playable experience. And, really, the impact could be compared to using early LCDs with high latency, which people played on for years anyway. We didn't really exit "the dark times" of LCD gaming until well into the mid-2010s, and that experience was considered high-end at the time. Put a 2015 gaming monitor next to a display made in the last five years and that's basically the difference you feel when turning on frame gen. That's a perfectly acceptable sacrifice for many people in today's market.
It feels much worse than the old slow LCDs, MUCH worse, as someone who lived through all of that as well. In fact, my missus has a 1080p 60Hz screen from 2010 (the cheapest she could find at the time) and a 3060 Ti, and frame gen at 30 fps is practically unusable. It feels like you're aiming through jelly or someone spiked your drink. It's absolutely horrendous. Turning down some settings to get a better framerate and then turning on frame gen can be okay, but it still never gets remotely close to feeling like the framerate being output: 60 fps with frame gen looks smoother but still feels like 60 fps, though at least it doesn't feel like squeezing wet noodles.
And while people talk about the increase in product prices being aligned with inflation, the increase in wages has not kept up with costs, and that's the real issue. It's a nice cop-out argument until you realize people are essentially making half as much as they were 5 years ago.
Oh, this I totally agree with. I'm not advocating for more expensive GPUs or anything like that; I was just pointing out that the only DLSS feature that doesn't work on prior RTX cards is frame gen, and it's really not a very useful feature unless you already have fairly decent framerates. I'm sure Hardware Unboxed did a deep dive on frame gen several times, and they found the same thing I did: low framerates make the feature practically unbearable. It shouldn't even be allowed below 60 fps.
 
744mm² is massive. The manufacturing cost of the die is huge, and GDDR7 is very expensive too.

Even with a good yield you'll be lucky to come away with 50 or 60 dies on what must be a $25,000 wafer. Napkin maths says at least $400 a die, before you factor in anything else at all.
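That napkin math roughly checks out with the standard dies-per-wafer approximation; here's a sketch, where the $25,000 wafer price and the yield figures are assumptions rather than known numbers:

```python
import math

# Rough dies-per-wafer estimate for a 744 mm^2 die on a 300 mm wafer, using the
# common approximation: pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
wafer_d_mm = 300
die_area_mm2 = 744
wafer_cost_usd = 25_000   # assumed wafer price from the comment above

gross_dies = (math.pi * (wafer_d_mm / 2) ** 2 / die_area_mm2
              - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))
print(f"gross dies per wafer: ~{gross_dies:.0f}")          # ~70

for yield_rate in (0.7, 0.8):                              # assumed yields
    good = gross_dies * yield_rate
    print(f"yield {yield_rate:.0%}: ~{good:.0f} good dies, ~${wafer_cost_usd / good:,.0f} each")
# ~49-56 good dies, roughly $440-$510 per die before packaging, memory, board, or margin
```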
 
Adding bus width essentially just adds traces on the PCB. That's an oversimplification, but increasing bus width doesn't add much cost. The problem is that some of these features need 32 or 64 bits of bus width to work properly. So now you're waiting for enough bus to open up for the feature to function. You're delaying data that could be used for rendering to make room for upscaling or ray tracing. It gets delayed by a clock cycle or, in some instances, needs to be used every clock cycle. So that 192-bit bus on the 5070? It's realistically a 160-bit bus. When you get down to cards with a 128-bit bus, because of how many things that card needs to do just to function, it really only has 96 bits available. So the faster bandwidth helps, but the smaller the bus width gets, the bigger the impact these "necessary features" have on the overall performance of the card. This is why you see massive gains when running DLSS on things like the 80 and 90 class cards but sometimes only single-digit gains on lower-spec cards.
I would LOVE to see the technical documentation and testing that proves that these features need 32 or 64 of whatever bits dedicated to JUST that tech to work properly.

The issue, of course, is that if that were true, then cards like the 4060 would be totally disproportionate in their performance relative to higher-end cards. Except we don't see that. Those cards are right where you would expect them to be, given their core counts.

Operations on GPUs have been split across multiple clock cycles since the days of the Voodoos. It's not like that stuff just stops working.

I seem to remember similar claims back in the day, that the smaller buses could NEVER fill the needs of these newer GPUs and you NEEDED 384/512-bit for high-end resolutions. And yet they were not only fine, but they kicked the *** out of older cards. Cards like the GTX 660, with a 192-bit bus, beat the 384-bit GTX 480 in benchmarks. Funny how that works.
Their Linux support is trash, and that's really why an Nvidia product will never be an option for me. They said they're working on improving it, but I'll believe it when I see it. For the time being, Arc cards have better Linux driver support than Nvidia cards. My next card is likely going to be an 8800 XT or a used 7900 XT/XTX if the price is right.
Their drivers work fine on Linux. I assume you're talking about the open-source drivers. That's a totally lost cause and frankly makes no sense. "Oh, I can't use a closed-source driver, that's PROPRIETARY! I need OPEN drivers to run my proprietary gaming software!"
They admitted they skimped on development and marketing, and the incentives they did issue mostly let scalpers buy cards at rock-bottom prices. Then the supply chain issues hit shortly after, and neither company could get new cards to market.
Oh, so again, it wasn't the fault of the consumer, but of the market itself. Interesting.
Frame gen is now considered an essential feature and if you still have a 30 series card, you're gonna need to use FSR to get it. I can't wait to see what new features DLSS 4 brings with the 50 series that 40 series users will need to start using FSR to get.
That's your opinion. FSR is also considered by many to be far inferior to DLSS, especially with text. I've tried FSR in games, and holy hell, it looks awful.
Just remember: EVERYTHING happens for a reason.....

And the reason we are in this convoluted clustercrap sh*tfest is that stupid people continue to do stupid sh*t, like paying nGreediya's outrageous prices for GPUs over & over & over again, which only condones their practice of raising prices & producing minuscule/pitiful performance improvements with every new generation of cards while constantly charging more for them!

They're just following Intel's lead; Intel has done the exact same thing for many, many years now....

So.... if you purchased an NV card in the past 5-8 years, this entire situation is YOUR OWN DAMNED FAULT ! Just deal with it
Yeah, consumers should have bought inferior hardware with worse drivers because......good cards bad?

Stop stanning for AMD. They don't care about you. AMD's GPU side has been ALL over the place, dropping out of markets or delaying products or screwing things up. Consumers like consistency, hence why those same consumers that LOVE "ngreedia" can't get enough Ryzen processors. Oh wait, those are AMD. Hmmmm..... (BTW, Intel held a monopoly on consumer CPUs for a decade and kept their price increases in line with inflation. AMD gets ahead, and the MOMENT they had the advantage, they jacked up the prices on CPUs from $200 for 8 cores to $400+. AMD, the way it's meant to be gouged.)

If you have AMD GPUs, like me, and you're on the high end, AMD is leaving you high and dry. Again. This is the second time in a decade they've done this. You know what they say: fool me once, shame on you; fool me twice.....

When AMD makes a good product, it sells. The RX 6000s sold as soon as they came in stock; their low numbers were AMD's fault, since they (correctly) prioritized Epyc production at the expense of consumer GPUs. The 7000s, when priced right, sold great. Of course, AMD played those ngreedia games and priced their cards wrong to drive consumers to higher-end cards, and instead drove them to Nvidia. Oops.
 
Their drivers work fine on Linux. I assume you're talking about the open-source drivers. That's a totally lost cause and frankly makes no sense. "Oh, I can't use a closed-source driver, that's PROPRIETARY! I need OPEN drivers to run my proprietary gaming software!"
I doubt many people on here have doubled down on Linux as much as I have. I'm going to assume that you aren't arguing in bad faith and explain why Nvidia's Linux drivers suck. It has very little to do with open- versus closed-source drivers. Making their drivers open source would only put a band-aid on the problem and not really fix it.

Essentially, they don't want to update their drivers to play nicely with the fixes and features that have been added over the years. A notorious one is the multiple-display issue, the scaling issue. It's a nightmare getting Proton to work properly on an Nvidia GPU, and since the driver is closed source, you don't know if you're doing something wrong or if the GPU is having an issue. Then there are the memory leaks. I could not game for more than 2 hours on an Nvidia GPU because it would just fill up my RAM. For some reason their drivers would be eating up over 40 gigs of RAM after just a couple of hours of gaming. The solution on that build? Go from 64 to 128 gigs so I could game for 4-5 hours instead of just 2 before needing to restart or risking a system crash. Then there is just an inconsistency in how things work. Sometimes it would work in one program and not another. I remember one issue where CAD software would make the GUI crash if I launched Firefox BEFORE I opened the CAD software. It didn't matter if it was FreeCAD or LibreCAD; it was anything CAD. And it's not that I had to CLOSE Firefox before opening CAD; it's that if Firefox had been running on the machine at any point during the day before I opened CAD, the GUI would crash. So any time I wanted to work in CAD, I had to restart. If I booted CAD and then opened Firefox, it was fine. If at any point I closed CAD after that, it would require a full system restart if I wanted to open it back up again. So I receive a CAD file in my email that I want to look at? Well, I have to download it and then restart before opening it.

My 1080 Ti at the time worked fine in Windows but fought with Linux like hell. I replaced it with a 6700 XT and it's been smooth sailing ever since. Frankly, I can't believe I put up with that for as long as I did, but gaming finally took a back seat among my computing interests.
 
I would LOVE to see the technical documentation and testing that proves that these features need 32 or 64 of whatever bits dedicated to JUST that tech to work properly.

There’s no technical documentation because everything he said is false. Modern GPU memory buses work in multiples of 32 bits. There’s nothing “better” about a 256-bit bus vs a 192-bit bus except for total bandwidth. You can get the same result by increasing clocks, which is exactly what GDDR7 does.
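To illustrate that point with rough numbers (the per-pin speeds below are assumptions based on typical GDDR7 and GDDR6X figures, not confirmed specs): a narrower bus at a higher data rate delivers the same total bandwidth as a wider, slower one.

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(192, 28))   # 672.0 GB/s -- 192-bit bus with 28Gbps GDDR7
print(bandwidth_gb_s(256, 21))   # 672.0 GB/s -- 256-bit bus with 21Gbps GDDR6X, same total bandwidth
```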
 
Stop stanning for AMD. They don't care about you. AMD's GPU side has been ALL over the place, dropping out of markets or delaying products or screwing things up. Consumers like consistency, hence why those same consumers that LOVE "ngreedia" can't get enough Ryzen processors. Oh wait, those are AMD. Hmmmm..... (BTW, Intel held a monopoly on consumer CPUs for a decade and kept their price increases in line with inflation. AMD gets ahead, and the MOMENT they had the advantage, they jacked up the prices on CPUs from $200 for 8 cores to $400+. AMD, the way it's meant to be gouged.)

Oops, close but fumbled at the end.

Cheapest AMD 8-core MSRPs at a current-gen release were $299.
Some AMD 8-core MSRPs at release have been $329.
Current AMD 8-core MSRP at release: the Ryzen 7 9700X is $359.

Prices go up sometimes but not double as you imply. Sorry if that ruins the narrative.
 