I said 28nm just because it's the last thing I worked on and saw a picture of. I could literally count the blobs on the picture. Well, at least I was told the speckly blobs were atoms. The exact picture I was shown won't be on the internet, and I can't find a decent one right now.
I know nothing about materials science, however, so the speckles could have just been noise in the image :P Threads like this seem to indicate we're definitely into the "countable" range of atoms in our gates! If you need all of your gates to be 28 atoms wide, and one of them is accidentally made at 24 atoms, then a normal voltage is now an over-voltage and pop: there's a short in that transistor every time it's used.
edit: Also, when I was at the University of Manchester 10 years ago, the lead silicon guy and the graphene guy showed us a picture, and the bit between the gate and the channel was 2 atoms wide or something absurd. And after graduation a mate of mine ended up working in the graphene team, and he used to send me pictures from the electron microscope where, again, I was told that the blobs I could see were individual atoms, and I was shocked at how few there were. The wiki page for 5nm 'backs this up' -- all of the examples contain transistors with atoms countable in the tens! (It still blows my mind, really, to think about things operating at that scale.) It makes sense, given that an Angstrom is 1/10th of a nanometer and that an Angstrom is the unit typically used to measure atoms. In a 14nm chip there really isn't a lot in each transistor!
Multi-edit: Though doing maths like that is probably wrong. I was under the impression that 14/10/7nm weren't actually 14/10/7nm in the same way that 90nm was 90nm -- i.e. the names no longer refer to the minimum gap between the important bits, they're a bunch of marketing spin designed to make the chips sound smaller. Still: things are tiny and definitely scraping at the edges of physics!
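For what it's worth, the back-of-the-envelope atom counting above can be sketched out. This is my own rough arithmetic, not from any of the linked threads: I'm assuming silicon's lattice constant of ~0.543 nm and treating half of that (~0.27 nm) as a crude "atom spacing", which is only meaningful for older nodes where the number was a real physical dimension.

```python
# Back-of-the-envelope: roughly how many silicon atoms span a feature?
# Assumption (mine): silicon's lattice constant is ~0.543 nm, and we use
# half of it (~0.27 nm) as a very crude atom-to-atom spacing.
# Caveat from the thread: modern node names (14/10/7/5nm) are marketing
# labels, not physical dimensions, so this is only illustrative.

SI_LATTICE_NM = 0.543
ATOM_SPACING_NM = SI_LATTICE_NM / 2  # ~0.27 nm, very rough

def atoms_across(width_nm: float) -> int:
    """Crude count of atoms spanning a feature of the given width."""
    return round(width_nm / ATOM_SPACING_NM)

for node in (90, 28, 14, 7, 5):
    print(f"{node} nm -> ~{atoms_across(node)} atoms across")
```

Even with generous error bars, anything in the single-digit-nanometre range lands squarely in "countable tens of atoms" territory, which matches the pictures.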
Out of interest, you say many companies. I was aware of Samsung and Intel -- which other fabs go this low?
I thought I knew what binning was, but I might be wrong. I mainly knew it from when I worked in the GPU industry. So I've watched that video, hoping to learn, but it's exactly what I described? e.g. "this i3 has an identical die layout to an i5, but it failed some of its tests, so the faulty modules were disabled and it was labelled an i3".
I'm not sure what I'm missing? :)
(What I'm probably wrong about: I was under the impression that modern silicon products are more similar to each other and are more aggressively binned than in the past, e.g. 15 years ago, when different products had intentional architectural differences and the binning sorted them into the shit and good editions. So much so that 'design for binning' is more intentional now than it was then.)
You're right that a 2x performance gain isn't stagnation. Really, it's the gains in single threaded performance that have stagnated, compared to overall performance. e.g.:
The rate of gain in single threaded performance is a crawl! Here's an interesting article with some pretty graphs (watch out, they're all log graphs).
Also, I've not read that article, so I don't know if that's simply running Dolphin or running a specific Dolphin single-threaded benchmark. But I don't really believe that Dolphin is completely single thread bound. It might be heavily single-threaded, but even if it does a tiny bit of naturally parallelisable work then suddenly the core count matters and the graphs get skewed.
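That "tiny bit of parallelisable work" point is basically Amdahl's law. A minimal sketch (my framing, not from the article) of how even a small parallel fraction lets core count skew a supposedly single-threaded comparison:

```python
# Amdahl's law: overall speedup from n cores when only a fraction p of
# the work parallelises. Even a small p means core count shows up in the
# numbers, skewing a "single-threaded" benchmark comparison.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup with n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# If just 10% of Dolphin's work parallelises, an 8-core chip already
# picks up a measurable edge over a single core:
print(amdahl_speedup(0.10, 8))
print(amdahl_speedup(0.10, 16))
```

So two CPUs with identical single-threaded grunt but different core counts wouldn't score identically, which is exactly the skew I mean.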
I was going to look up dhrystone and whetstone benchmarks when replying to Wumpus, but last time I did that they were also parallelised/vectorised/threaded up the wazoo and therefore confusing and no longer a simple test of single-threadedness.