What was the experience last round when the 2070s came out? I’m wondering if both 3080 and 3070 supply may improve around the 3070 release?

Jay’s video showed that his MSI Gaming X Trio had 2 clusters of the better capacitors, just like the FE cards. I have the same card (day 1 card), and mine only has 1 cluster of the better capacitors. Makes me wonder if MSI discovered issues in their original build and made a running change. I have not had any crashes, but I’ve been having other build issues that have kept me from gaming as much as I’d have liked.

https://forums.evga.com/Message-about-EVGA-GeForce-RTX-3080-POSCAPs-m3095238.aspx


Makes you wonder if that is why the stock for most cards is pretty much non-existent.

Doubt it, because most of the other AIBs seem to have said “fuck it” and shipped their faulty designs anyway.

I’m pretty sure they’re building to reference spec, so I’d blame Nvidia for a) a bad spec and b) taking so long to get drivers to their partners so they could push these cards and find these issues.

No, and no real hurry, in fact - my CPU and motherboard are just 10 years old so I’m looking to do a significant upgrade when it makes sense to do so. If that’s not now, I don’t mind waiting a few weeks or even until 2021. Ideally before end of year though. I’ll probably be somewhat lame and just get an Alienware again.

Appreciate the heads up, guys - I don’t follow this stuff closely any more.

So that EVGA post pretty clearly shows Nvidia fucked everyone over by trying to be overly secretive / misleading in benchmarks and not giving drivers to OEMs until announcement day.

Seems like Nvidia was rushing to beat AMD to the punch. When Big Navi is revealed I’m curious if we’ll see if they had good reason to do so or if they’re still only competitive in the low/midrange. I’m still probably getting a 3080 here but I feel less bad about missing out on my chance to snag one at launch.

There are basically two brands you want to avoid: Gigabyte and Zotac. Both shipped with zero MLCC caps. Zotac already underclocks, so instability won’t be an issue, but you’re paying MSRP for a slower card.

MSI in theory does use 1 MLCC cap, but there are widespread reports of instability with MSI cards, so perhaps something else is going on there. I wouldn’t buy MSI either. Also MSI has a plastic backplate.

I wonder if this will affect 3070 cards in a similar way.

With lower power draws for the 3070, maybe not? But I’m not an EE guy. I’m sure there will be multiple YouTube teardowns giving the 3070 a full colonoscopy.

Yeah, same. While I’ve been curious heading into these launch windows, Nvidia’s mistakes have certainly piqued my interest that much more in AMD’s 6xxx series.

Makes me wonder how much the AIB companies are actually saving per board by using the cheaper caps. When you’re talking hundreds of dollars per card, why not use better components and reduce your tech calls and returns, and protect your company’s brand name?

From what I understand it’s not a matter of cheap vs better, they’re better at different things. Nvidia spec said either should work.

From the post I linked above (I’m not an electrical engineer):

Nvidia’s specs required one MLCC and 5 POSCAPs, so anyone with 6 did not follow their specs.

Thanks for the clarification!

A sort of tangent, ASUS (or was it MSI? I forget now) had lagging UEFI BIOS update support for their new Ryzen motherboards at the launch of Zen because one of their notable-in-the-community staff had departed. Months after launch they were still missing support. You don’t get this sort of gossip or insight unless you follow the subreddits and forums.

Some info about RDNA2 leaked in the latest macOS beta, including clockspeeds. As previously leaked, we’re looking at 40CU and 80CU variants, but the clocks are much higher: the 40CU SKU up to 2.5GHz (31% over the 5700 XT at 1.9GHz) and the 80CU up to 2.2GHz.
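For anyone who wants to sanity-check those percentages, here’s a quick back-of-the-envelope calc using the leaked numbers (these are leak figures, not confirmed specs, and the throughput line assumes zero IPC change and perfect scaling, which never happens in practice):

```python
# Leaked RDNA2 clocks vs the 5700 XT baseline (leak numbers, not confirmed specs)
rx5700xt_ghz = 1.9   # 5700 XT reference boost clock
navi_40cu_ghz = 2.5  # leaked 40CU SKU clock
navi_80cu_ghz = 2.2  # leaked 80CU SKU clock

# Clock uplift of the 40CU part over the 5700 XT
uplift = navi_40cu_ghz / rx5700xt_ghz - 1
print(f"40CU clock uplift vs 5700 XT: {uplift:.1%}")

# Naive throughput proxy: CUs x clock, assuming zero IPC change and perfect scaling
baseline = 40 * rx5700xt_ghz
print(f"80CU relative throughput: {80 * navi_80cu_ghz / baseline:.2f}x")
```

The uplift comes out to roughly 31-32% depending on rounding, matching the figure above, and the naive CU-times-clock estimate puts the 80CU part at well over 2x the 5700 XT before any architectural gains.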

Even if RDNA2 offers zero IPC improvements, the CU counts and clocks would make these cards extremely attractive, and the 40CU model could be priced really cheap. It could come in at $399 and hose the 3070.

Oh, and the highest-end 80CU model is 238W TDP. Actual power draw will probably be higher than that, since that’s the socket TDP rather than total board power and the memory draws power too, but I imagine it will come in well under 300W overall.

Buildzoid/Actually Hardcore Overclocking (Gamers Nexus partner) with a DEEP dive into the capacitor issue with the 3080 (also includes a funny mini-rant about the use of the term poscaps, which is actually a manufacturer-specific series of the problematic capacitors):