Build a new PC, learn to hate hardware vendors

A fair warning: this isn’t going to be terse and to the point. It’s a story (but packed with information, more or less useful).

A lesson: you’ll learn how the exact same hardware configuration, with the same software configuration and settings, can still cost your PC up to 10% of its speed. And you can’t do anything about it.

A few years ago I had to buy an LCD monitor. I was still using old, bulky CRTs and I was horrified when I compared them side by side with a standard LCD. To summarize, I have a big problem with a number of aspects, but especially color banding. That is a thing in MOST consumer LCD monitors, because they “cheat” the spec and tell you they are 8bit monitors when instead, almost all of them, are 6bit + FRC, a way to “simulate” a higher number of colors, leading to horrid color banding. 10bit color HDR monitors don’t have this shit, but of course the price tag is different…

But none of that matters, the important part is that when it was overdue time to buy the LCD I decided to go with one of the most common models by DELL, one that usually had pretty good reviews and was generally recommended. Turns out the monitor is kind of bad, and after some research I found out the problem: starting from a certain point in time, DELL replaced the actual internal panel with a cheaper one from a completely different producer. I bought the EXACT same model, same ID, that got good reviews for years. And so I learned this new lesson: sometimes hardware vendors change the product and sell you something that’s not what you think it is. And it’s not simple to find out, unless you take the thing apart yourself…

There were then variations on the same theme. For example I remember that when ATI pushed out the r9 290 they sent reviewers specially selected models that performed much differently from the actual retail ones that people bought. Then came all the problems with thermal throttling and bios updates that followed.

Now I’m building a new system, to replace… a dual core E8400, 4Gb ram. An overdue update.

Since prices are a bit crazy right now, I decided to stay in the previous generation but aim relatively higher than I normally would. So: Intel i9 10900k(f). For comparison, another PC I have (but don’t use) has a 4770k, relatively more recent. It has 4 cores, 3.5Ghz base and 3.9 in turbo. It also overclocks fairly easily, even if temperature goes up fast; in general it can do 4.2 even on air.

But what I knew, up to this point, is that when the CPU is busy it can easily reach its 3.9 turbo and sustain it. It’s a real speed. What’s advertised is what you get.

Now I’m reading about the 10900k, and it’s a mess. It starts at 3.7Ghz by default, so just 200Mhz above the 4770k’s base, but nominally it goes up to 5.3! Yet it’s a totally different story from the 3.9 turbo of the 4770k.

It basically turns your home PC into a phone, where the performance written on paper is useless, because it can be kept only in very short bursts before thermal throttling kicks in, and only on 2 of the 10 cores. The all-core turbo is set at 4.8, but that too is a lie, because it’s still burst-limited. It can go to 4.8 on all cores… for a minute or less.

And then come the motherboard manufacturers, which by default enable their new performance mode to “remove the limits”. This is called MCE (multi core enhancement) and it’s based on a terrible idea, because removing those limits means the CPU is going to suck an unreasonable amount of energy, get extremely hot very quickly, and then throttle back FASTER than it did with the default settings. The result is you consume more energy while losing performance. INNOVATION!

So, I bought a 10900k, and I have no idea what speed it can ACTUALLY sustain. It’s going to be trial and error, and learning technical details that look more like alchemy. Setting obscure PL1/PL2 power limits and all that stuff.
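Those obscure knobs mostly boil down to three numbers Intel exposes: PL1 (sustained power), PL2 (burst power) and Tau (roughly, how long the burst budget lasts). Here is a minimal sketch of how that budget behaves, assuming the commonly cited spec defaults for the 10900k (125W / 250W / 56s) and a simplified first-order moving average; real boards override all of this, so treat it as an illustration, not a model of any specific machine:

```python
# Hedged sketch of Intel's PL1/PL2/Tau power budgeting.
# Values are the spec defaults commonly cited for the 10900k;
# motherboards routinely override them (see MCE below).

PL1, PL2, TAU = 125.0, 250.0, 56.0  # sustained W, burst W, time constant (s)
DT = 1.0                            # simulation step, seconds

def simulate(seconds, demand=PL2):
    """Per-second package power under a constant power demand."""
    avg = 0.0   # exponentially weighted moving average of power
    trace = []
    for _ in range(int(seconds / DT)):
        # While the moving average is below PL1, the CPU may burst to PL2;
        # once the budget is spent, it clamps to PL1.
        power = min(demand, PL2) if avg < PL1 else min(demand, PL1)
        avg += (power - avg) * (DT / TAU)  # first-order EWMA update
        trace.append(power)
    return trace

trace = simulate(120)
burst = sum(1 for p in trace if p > PL1)
print(f"full PL2 burst lasted ~{burst}s, then clamped to PL1={PL1:.0f}W")
```

With these defaults the burst lasts roughly 39 seconds before the clamp kicks in, which lines up with the “for a minute or less” figure above.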

This CPU is a cheater.

But the problem here isn’t the CPU, and it isn’t even the problem of specific hardware parts that perform better or worse due to the pure luck known as “silicon lottery”. The theme here is something I just discovered, that seems to make a rather big difference, but that usually no one considers.


What do we generally know? Nothing too complex. One usually goes for as much RAM capacity as one can reasonably afford. Then you look at the max frequency, since that provides the bandwidth, and then, maybe, you look at timings. It gets tricky because there isn’t any known formula to decide when lower timings are better than higher frequency. But that’s all, right?


No one gives a shit about these things, because when you look at a benchmark the difference is minimal:


See? That’s +3 fps, going from 3600 to 4000. Who cares?

That, again, is why no one cares about RAM. You get the amount you want, find a decent frequency within the budget you have, and done. No one will ever notice the difference.

But there’s this other thing that not many know about. You know about single channel and dual channel. These days you generally just buy dual channel kits: they come in pairs, and it’s generally for the best. You can go to four modules, but there’s no difference because they are still in dual channel. And so you stick to two, because two are usually easier to keep stable than four.

What you might not know about is… single rank and dual rank.

The greater absurdity here is that THERE’S NO WAY to find out when you buy a module. It’s a surprise. Either you find out by looking at a special code (that no online shop will show you), or you’ll have to plug in the memory and use software to probe the module itself.
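For the probing part: on Linux, `sudo dmidecode -t memory` prints a “Rank” field for every populated slot (on Windows, CPU-Z’s SPD tab shows the same information). A hedged sketch that pulls the rank out of that output; the SAMPLE text below is made up for illustration, not taken from any real machine:

```python
# Sketch: extract the per-slot rank count from `dmidecode -t memory` output.
# SAMPLE is a hypothetical, trimmed-down capture of two populated slots.
import re

SAMPLE = """\
Memory Device
        Size: 16 GB
        Locator: DIMM_A2
        Speed: 3600 MT/s
        Manufacturer: Crucial
        Rank: 1
Memory Device
        Size: 16 GB
        Locator: DIMM_B2
        Speed: 3600 MT/s
        Manufacturer: Crucial
        Rank: 1
"""

def ranks_per_dimm(dmidecode_text):
    """Map each populated slot to its rank count (1 = single, 2 = dual)."""
    result = {}
    slot = None
    for line in dmidecode_text.splitlines():
        if m := re.match(r"\s*Locator:\s*(\S+)", line):
            slot = m.group(1)
        elif m := re.match(r"\s*Rank:\s*(\d+)", line):
            result[slot] = int(m.group(1))
    return result

print(ranks_per_dimm(SAMPLE))  # -> {'DIMM_A2': 1, 'DIMM_B2': 1}
```

Of course, by the time you can run this, you already bought the sticks. That’s the whole problem.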

What’s single rank and dual rank? Just the way the memory chips are arranged on the memory module. But the big deal is that it makes a rather significant DIFFERENCE IN PERFORMANCE. Because with dual rank the memory controller has broader parallel access to the chips, interleaving between the two ranks, so it is faster. Real world faster, not just in some benchmarks. I’ve seen tests where single rank can lose up to 10 fps, and that’s WAY MORE than the typical difference shown above.

What makes it worse is that the EXACT same memory module, same producer, same ID, can be single rank or dual rank. On paper, single rank is “newer” and better: it means the actual memory chips are twice as large, and they work better. But since they are accessed as a single rank, they end up slower.

Now… Manufacturers are moving from dual rank to single rank, because for them it’s more efficient to produce. And more and more people (like me) find out that what until yesterday was a standard dual rank model today becomes single rank, ending with much different performance despite you buying the EXACT same hardware part, at the same price. Welcome, your brand new pc is now 10% slower. Same modules, same prices… at some point the hardware vendors made a sneaky update and started selling something that is 10% slower. And almost no one is aware of this, while maybe still fretting about frequency and timings that are a drop in the ocean when it comes to performance.

This is especially hitting what I suppose is now one of the most common, quite mainstream targets: 16Gb x2 kits. It’s these that are now commonly “upgraded” to single rank.

It also makes the whole deal with dual channel more complex, because single/dual rank applies across the board. FOUR single rank modules perform like TWO dual rank ones. This means that if your memory is single rank YOU GET MORE BANDWIDTH by having FOUR modules onboard instead of two. This throws out of the window the rule that once you are in dual channel there’s no difference between two or four modules. Four is better (if single rank).
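What actually matters for the interleaving is ranks per memory channel, not the stick count. A tiny back-of-the-envelope helper (hypothetical, just to make the combinations explicit):

```python
# Hypothetical helper: ranks seen by each memory channel, assuming the
# sticks are split evenly across channels (the normal dual-channel layout).

def ranks_per_channel(sticks, ranks_per_stick, channels=2):
    assert sticks % channels == 0, "populate the channels evenly"
    return (sticks // channels) * ranks_per_stick

print(ranks_per_channel(2, 2))  # two dual-rank sticks    -> 2
print(ranks_per_channel(4, 1))  # four single-rank sticks -> 2 (same!)
print(ranks_per_channel(2, 1))  # two single-rank sticks  -> 1 (the slow case)
```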

Yet, it might be slightly worse to have four sticks instead of two, because four sticks stress the CPU memory controller a bit more, and especially with Ryzen this might be a step too far.


The good:

  • two sticks in dual rank
  • or four in single rank

The bad:

  • two sticks that are single rank, even if you are in that lovely dual channel

In many cases, with the exact same timings, a dual rank 3200 module performs up to 10% FASTER than a 3600 module that is single rank.

But you don’t know what to buy, because no one cares to tell you whether the module is single or dual rank.

I bought this:

It’s a fairly typical 16Gb x2 3600 CL16 kit. You can look at the “specifications” and you’ll find the usual information. But no one will tell you if it’s dual or single rank. Neither will the online shops. And you can’t even know in advance, because Crucial produced the exact same module BOTH in dual and single rank.

If it’s dual rank, the product code ends with M16FE1; if it’s single rank, it ends with M8FB1. But this isn’t a code that is shown by the shops that sell it. You’ll find out when you receive it.
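Since the only tell is the tail of the full part number printed on the stick, a trivial (hypothetical) classifier using the two suffixes mentioned above; the full part numbers in the example are made up for illustration:

```python
# Sketch: guess the rank of this Ballistix model from its part-number suffix.
# The suffix-to-rank mapping is the one described in the post.

SUFFIXES = {
    "M16FE1": "dual rank (Micron E-die)",
    "M8FB1": "single rank (Micron B-die)",
}

def classify(part_number):
    """Return a rank description from the suffix, or None if unrecognized."""
    for suffix, desc in SUFFIXES.items():
        if part_number.endswith(suffix):
            return desc
    return None

# Hypothetical full part numbers, for illustration only:
print(classify("BL2K16G36C16U4B.M16FE1"))  # -> dual rank (Micron E-die)
print(classify("BL2K16G36C16U4B.M8FB1"))   # -> single rank (Micron B-die)
```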


This is a Crucial rep from a year ago. “We only use Micron tuned die in our Crucial Ballistix line, that’s something we are committed to!”

Yes, because they care!

… Then later that year they moved from the E-die to a B-die, which is single rank and performs significantly worse. Now the great majority of the models in circulation are the “updated”, nerfed model. Same price, -10% performance (ACTUAL real use performance), and something only the hardware geeks find out about.

People went crazy when the Meltdown mitigations forced a minimal loss of performance, one mostly visible only in specific benchmarks. Now we have something significant that changes from one day to the next, and pretty much no one knows about it. It’s hidden away in some serial code.

Even the difference between different major CPUs isn’t 10%, in many real use cases.

(by the way, this also means that buying 4 sticks of 8Gb each might be significantly better, and cheaper, than buying 2 sticks of 16Gb each. With four sticks you get two ranks per channel guaranteed. And it’s also easier to find good timings with smaller modules.)

It’s not limited to PC hardware, it’s a feature of modern manufacturing and widespread competition in the retail space of pretty much any industry that deals in more or less commodity goods. Pharmaceuticals, for instance. Your insurance company in many cases will only pay for generic drugs unless you get a waiver. Those generics are made by a plethora of manufacturers, so the drugs you pick up from the pharmacy one month may be made by a different company the next month. And the formulations can vary a lot even within the parameters that are allowed by the FDA. This can be a problem, and just as with the hardware you describe, it’s virtually impossible to find out the info you need to make good decisions, and even if you had the info, it’s very hard to influence what the insurance company will pay for or what the pharmacy will order.

In all of these cases, the issue is how the “thing” is defined. The parameters that define the commodity are deliberately broad enough to allow for variations in manufacturing, which on the whole are necessary to allow for the evolution of products and supply chains. The problem comes when the things we value about those commodities fall within the range of what can be changed at will. That is, if our priorities are simply “X GB of RAM,” we’re golden: 32gb is going to be 32gb, likewise for things on the spec sheet like frequency. You can reasonably expect those to be as advertised. But like pharmaceuticals, the actual effect in your application is not something that is guaranteed, and if your application/needs require a specific effect/characteristic of the product, you pretty much are going to have to pay extra for that.

Not saying it’s a good thing, but it is hardly unusual, if frustrating.

The dual/single rank thing kind of came out during the newest ryzen 5XXX benchmarks, when reviewers started seeing some large discrepancies in results. Took them a week or more to get it sorted out, if I remember correctly.

I was following that at the time, so when I built my machine I wasn’t caught by surprise, but yeah… building PCs has lots of edge cases if you want the absolute best performance.

Though coming from a 4GB E8400 system it seems a bit missing the wood for the trees.

… then you’ll be SUPER pissed when you go to buy SSDs and find undocumented controller changes that have slowed well-reviewed devices down by one-third to one-half but kept the price the same as ever.

But I’m not discounting your experience. It is absolutely maddening.


Now is literally the worst time to build.

Yeah, the alienware machine I bought came with 16Gb of RAM. I do wonder now what kind I got.

This Linus video seems well timed…

Yeah, it was a year ago too. Now it’s actually better. And it doesn’t look like it will be any better soon.

Me using a dual core E8400 with 4Gb of ram isn’t hyperbole.

The funny thing is: this PC has been on, 16 hours a day, since August 2008. That’s 13 YEARS. I think I broke some law of physics.

It’s even weirder because it was the only time I used the intel stock fan. But it’s probably offset by a cheap case that had a GIANT 27cm fan on the side.

I’m also still rocking 1280x960.

One trick to make this usable has been turning off all windows update services and the antivirus.

But some things blew up. I had to replace the two hard drives (but they broke slowly so I didn’t lose anything), the video card, and the PSU.

For those curious, the new build and my own weird choices are:

  • Case Asus TUF-501
  • PSU Corsair RM850x (the 2021 version that isn’t as good as the old 2018)
  • Motherboard Asus z490-E
  • i9 10900kf
  • cpu fan Noctua NH-D15
  • 32Gb ram Crucial Ballistix 3600 c16, gimped single rank version
  • 1Tb nvme Samsung Evo plus
  • Western Digital Red Pro 4Tb WD4003FFBX, 256Mb cache + 7200rpm

The RAM being single rank irritates me because I wanted the motherboard + cpu + ram combo to be at its best, and it won’t be.

On air, for ease of use. The interesting part was making it fit. The case has space for a 180mm cooler, and the Noctua is 165mm, but the only (cursed) ram I could find was the Ballistix, and it’s 39mm tall compared to the 32mm standard, so +7mm. That means I need to offset the cpu fan higher by those (at least) 7mm, making the whole thing 172mm. That still leaves 8mm before reaching the side panel, but of course I need a few mm of clearance between each part. So it was a tight fit. Now that I have the case it looks like there’s some more space than the specs say, so I should be fine.

Other problems though, because things are never made smartly. This case has the top basically bolted in, which means it cannot be removed easily. And even if I removed it, the screws are ON THE INSIDE. That means that if I take the top off to assemble the build, I can’t put it back on afterwards, because screwing it back requires reaching inside.

My worry is that the Noctua is extremely bulky. One problem is the CPU power cables that go in the top left corner. I guess I could connect the cables first and push them through the hole when putting in the motherboard. But I also still need to screw in the motherboard, and with the top closed and the huge CPU heat sink in the way I don’t know if I’ll have enough space for the screw in the top left corner… And I don’t think I can easily mount the CPU heat sink after installing the motherboard, because it requires insane pressure when you push it down, and I always worry the flimsy (and now open) motherboard tray is going to bend horribly.

The PSU is the 2021 model, and some reviews seem fine, but they decided to replace one PCI express cable with a CPU one. Consider that my z490-E should be relatively beefy, and it still only uses one and a half CPU cables. So now I have three of them, BUT only 2 pci express cables. One thing I learned is that when connecting video cards you should use a separate cable for EVERY connector, and video cards that have 2-3 connectors may behave WORSE otherwise. Why? Because if the load is spread evenly between the three connectors, you end up with 1 + (1+1): the second cable, supporting two connectors, has to bear twice the load constantly, even at lower power draw. The 2018 (white) version of the PSU is virtually identical, more silent, and has three pci-exp cables and two CPU ones, as it should be…
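The cable math above, in numbers, assuming a hypothetical 300W card that spreads its draw evenly across three 8-pin connectors:

```python
# Hypothetical worked example: per-cable load when a GPU spreads its draw
# evenly across its connectors. One entry per cable; the value is how many
# connectors that cable feeds (2 = a daisy-chained "pigtail").

def watts_per_cable(total_watts, connectors_per_cable):
    per_connector = total_watts / sum(connectors_per_cable)
    return [per_connector * n for n in connectors_per_cable]

print(watts_per_cable(300, [1, 1, 1]))  # three separate cables: 100W each
print(watts_per_cable(300, [1, 2]))     # two cables: the pigtail carries 200W
```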

The only partially good thing I learned is that you can buy another cable, so the problem is bypassable, since on the PSU side the sockets are agnostic about CPU/GPU connectors. So if I buy another pci-exp cable I can have 2 CPU + 3 GPU cables, which is what the default should be.

The Western Digital old school drive is the only 4Tb with 7200rpm I could find. But it’s generally meant for NAS and RAID; I’m only using it as a single desktop drive, hopefully it stays reliable (knowing about issues with a thing called TLER and similar). This was another case of obscure internal specs, but an older one: WD didn’t disclose, up to a certain point, whether its drives were SMR or CMR. What you want for decent performance over time is CMR. The Red Pro is CMR, so it’s what I got.

Another thing I left out is that my single rank RAM is likely “better” than the dual rank module. In this OC guide it’s written that the Micron 16Gb Rev. B (I think this is the one I got) is the second best tier after the famous Samsung 8Gb B-die, and so above the Micron E-die that is generally used for dual rank.

TL;DR: single rank memory sticks “should” overclock better, to the point they might equal or even surpass the dual rank sticks in performance. But that requires going ABOVE the XMP default profiles and fine tuning every single timing. That requires a fuckload of time and expertise, and also going through an endless chain of blue screens, since that’s the only way to find the perfect values.

So with single rank, either you enjoy being the geek who does deep overclocking and fine tuning (while also having luck with the silicon lottery), or dual rank will be much, MUCH better in performance at the official specs.