Intel's stagnating performance is not in your head

So, Kaby Lake launches today. I built my current rig in February 2012 with a 2500k. The system still performs quite well for my purposes (the video card has been updated a couple of times and now sports a 1070) but I think we are finally reaching the point where it would make sense to think about a rebuild or new build if I wanted.

That would actually be a tough choice because I love my current system and how quiet it is, but it limits me to impeller-style GPUs, which have become increasingly difficult to find. On the flip side, I invested a lot of money in the case, which is responsible for both the low noise and that limitation (Silverstone FT-02).

Heck, all I’d really need would be a new CPU, mobo, and memory. Maybe a new CPU cooler (not sure)?

I lol’ed

If you’re still rocking an older Ivy Bridge or Haswell processor and weren’t convinced to upgrade to Skylake, there’s little reason to upgrade to Kaby Lake. Even Sandy Bridge users may want to consider other upgrades first, such as a new SSD or graphics card. The first Sandy Bridge parts were released six years ago, in January 2011.

Again. Compare iPhone 4 performance (2010) with iPhone 7 performance (2016). Virtually the same exact time range.

If you don’t overclock, I agree it’s very underwhelming. Kaby Lake commonly overclocks to 5.0GHz while Skylake commonly hits 4.5GHz. Also, the base clocks are a bit higher.

Intel moved from tick/tock to process/architecture/optimization, and Kaby Lake is the optimization stage. So we already got the benefits of the 14nm process in Broadwell, and the brand new architecture in Skylake. Kaby Lake is a refinement of Skylake. Don’t expect much from it.

The next chip, codenamed Coffee Lake, will essentially be Kaby Lake refined yet again on 14nm, with the 10nm shrink (Cannon Lake) to follow. So expect faster clocks and lower power, but no additional features and a marginal IPC increase at best. The next truly sexy CPU from Intel, the next Skylake, isn’t due until 2019 at the earliest.

AMD? The king of paper launches? Anandtech is basically the marketing arm of AMD (no pun) at this point, publishing paper launches comparing mediocre CPUs to themselves and pretending treading water is progress.

Apple needs to be courageous and put their ARM CPUs in a desktop OS and form factor, or come up with a desktop dock form factor for their phones.

Anandtech had a better summary, and it’s not as dismal as Ars Technica’s.

Absolutely. By iPhone 8 there is no question they will be outperforming lower-end Core i3 chips.

It’s more than that, though: like you said, they’re already there. We don’t “need” AMD to save computing. A non-x86 competitor is already here. We just need the lead ship, currently mired in fearful complacency, to be brave enough to challenge the status quo. Which, of course, they will certainly not do under Tim Cook.

Let’s see, the iPhone 4 had the A4, which was a 32-bit SoC on a f’ing 45nm process. The iPhone 7 has the A10, which is a 64-bit SoC on a 16nm process. This isn’t some kind of wizard magic that Apple had and Intel didn’t; they are just following Intel down the path to smaller processes. But now they are running up against the same barriers, and they are still years behind.

Yes, but it is wizard magic that is apparently far beyond the abilities of Qualcomm.

We need someone to just nuke Qualcomm from orbit.

My i3 from 2011 still runs everything perfectly. But I did note that this new i3-7350 runs at 4.2GHz, the same as the i7-7700. So I could be tempted to get the same (effective) performance as the i7 for $157, be good for another 5 years, and my total CPU outlay for a full decade would be $267.

The names Kaby Lake and Coffeelake are too dumb to put into my computer. I could certainly do Cannonlake, though. That’s the name of a strong CPU.

Pentium 4s did 3.8GHz over a decade ago :). Granted, they would set fire to your house whilst doing so, but they did it! They also had terrible clock management, with barely any OS support for that kind of thing, so they were all constantly at 100% clock speed. (Given the advances in power and clock management, I would guess that most people’s computers (desktops, laptops, phones) spend more time at lower clock speeds than they did when people ran Pentium 4s, and still smoke them in performance!)
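For the curious, here’s a minimal sketch of how to watch that in action on a Linux box, assuming the standard cpufreq sysfs files are present (they are on most modern desktops and laptops): sample each core’s current clock against its maximum, and at idle most cores sit far below their rated speed.

```python
# Sample each core's current vs. maximum clock via Linux's cpufreq sysfs
# interface. Values are reported in kHz; we convert to GHz for display.
import glob
import time

def read_khz(path):
    with open(path) as f:
        return int(f.read().strip())

cur_paths = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))
max_paths = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq"))

for _ in range(5):  # take a few samples, one second apart
    cur = [read_khz(p) / 1e6 for p in cur_paths]
    mx = [read_khz(p) / 1e6 for p in max_paths]
    print("  ".join(f"{c:.2f}/{m:.2f} GHz" for c, m in zip(cur, mx)))
    time.sleep(1)
```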

The problem with going past 5GHz is that those Intel chips are really pushing the limits of physics. At 14nm there aren’t that many atoms left in a gate, so it’s very easy for the doping and such not to work correctly, and then the gates ‘break’ very easily. It’s one of the reasons clock speed has stagnated. Also, as the chips get more transistors on them, things like clock skew become even worse.

Because of these physical limits, faults are more common during manufacture, so these days CPU binning is much broader, i.e. i7s are cherry-picked i5s, whereas it used to be that you made a bunch of Pentiums and cherry-picked the crap ones to be the lower-clocked parts. It’s also one of the reasons they moved off tick/tock and onto the new 3-stage cadence, to eke out more time on each manufacturing setup.

So whilst throughput has been going up massively due to multi-core/multi-threading, architectural (IPC) improvements, power management and smaller transistor sizes, CPU clock speeds have really stagnated at around 4GHz.

Performance, however, has continued to increase :) And that’s what’s important.

(Though it’s worth pointing out that, due to this stagnation of clock speeds, single-thread performance, which you harp on about in that Qualcomm thread, has barely changed over the years. Single-thread performance definitely isn’t following the classic Moore’s law trend, though that’s really about transistor counts.)

ps: I watched an interesting video the other day that’s relevant to this thread:

No. Many companies have 7nm and 10nm in test already. Lithography and accuracy are what’s holding them back.
(Disclaimer: I work for Intel.)

No. Binning is nothing to do with what you are describing. If you want to understand more about binning, watch this: https://www.youtube.com/watch?v=8AQPIBfIqMk

Yes! One of the coolest architectural improvements is multi-core, and how you can now carve several VMs out of a single processor with multiple cores. I have yet to dip into this, but I have several friends who have done it with startling results.
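I haven’t tried it myself either, but here’s a rough sketch of the idea: carve a 4-core chip into two 2-vCPU QEMU/KVM guests, each pinned to its own pair of physical cores. The disk image names are hypothetical placeholders, and qemu-system-x86_64 and taskset are assumed to be installed.

```python
# Rough sketch: launch two 2-vCPU QEMU/KVM guests from one 4-core CPU,
# pinning each guest to its own pair of cores with taskset.
import subprocess

vms = [
    {"name": "vm1", "cores": "0,1", "disk": "vm1.img"},   # hypothetical disk images
    {"name": "vm2", "cores": "2,3", "disk": "vm2.img"},
]

procs = []
for vm in vms:
    cmd = [
        "taskset", "-c", vm["cores"],      # pin the whole guest to these cores
        "qemu-system-x86_64",
        "-enable-kvm",                     # use hardware virtualization
        "-smp", "2",                       # 2 vCPUs per guest
        "-m", "2048",                      # 2 GB of RAM per guest
        "-drive", f"file={vm['disk']},format=raw",
        "-display", "none",                # headless
    ]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()
```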

I’d have to agree that processor speed is not the determining factor anymore. For example, electricity consumption is so huge that big vendors will completely replace their entire server farms every generation, because the electricity cost savings from one generation to the next are so dramatic.

And, if you want to believe the marketing, you can now run Overwatch on a laptop with the latest Kaby Lake processors without a dedicated GPU, so Intel’s integrated graphics continue to eke out little improvements for casual gamers.

Yep, Intel integrated graphics have basically destroyed the once fairly lucrative <$100 discrete GPU market. You can play WoW, LoL, Dota 2, Overwatch, etc., on integrated graphics. Not at 60fps and 1080p, but you can play them.

Not intel, but I saw something briefly about Qualcomm’s new 835 processors which are slated to be used in a bunch of upcoming VR and AR devices, and seemed interesting.

Not exactly.

Emulators are often bound by single-thread CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that raytraces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy for the speed of Dolphin CPU emulation, which is an intensive single-core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53 minutes.

Comparing the apples-to-apples K models:

  • Kaby Lake 6.02
  • Skylake 6.47
  • Haswell 7.63
  • Ivy Bridge 10.67
  • Sandy Bridge 11.57

You’re looking at 2x improvement in single thread performance from 2011 to 2016. Is that “barely changed over the years”? I think not.

Granted it definitely pales compared to the 10x improvement in overall performance on mobile (well, Apple mobile, anyway) from 2010 to 2016… but it ain’t exactly chopped liver, either.

I did the math. Overall cumulative Dolphin (single thread perf) difference from Sandy on far right, relative difference for each generation in the middle.
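For anyone who wants to check it, the arithmetic can be redone straight from the Dolphin times quoted above (minutes, lower is better, so the speedup of B over A is time_A / time_B):

```python
# Per-generation and cumulative single-thread speedups from the Dolphin
# benchmark times quoted above (minutes; lower is better).
times = {
    "Sandy Bridge": 11.57,
    "Ivy Bridge":   10.67,
    "Haswell":       7.63,
    "Skylake":       6.47,
    "Kaby Lake":     6.02,
}

gens = list(times)
baseline = times["Sandy Bridge"]
for prev, cur in zip(gens, gens[1:]):
    gen_gain = times[prev] / times[cur]     # speedup over the previous generation
    cumulative = baseline / times[cur]      # speedup over Sandy Bridge
    print(f"{cur:12s} +{(gen_gain - 1) * 100:3.0f}% vs {prev:12s} {cumulative:.2f}x vs Sandy Bridge")
```

That works out to roughly +8% for Ivy Bridge, +40% for Haswell, +18% for Skylake, and +7% for Kaby Lake, for about 1.9x cumulative over Sandy Bridge.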

Haswell was the biggest single thread perf win, but Skylake was also substantial.

Does that SKU of Skylake include AVX512? The most interesting things about Haswell and Skylake, from an ISA perspective, were AVX2 and AVX512, respectively. If Dolphin has substantial vectorized code that’s tuned for each architecture, that would line up. My recollection is that it does, but I haven’t looked closely, so grain of salt and all that.
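For what it’s worth, a quick way to see what a given chip actually reports on Linux is to read the flags line from /proc/cpuinfo: avx2 shows up from Haswell onward, while avx512f only appears on parts that really implement AVX-512 (the desktop Skylake chips don’t report it; AVX-512 only shipped on the server/HEDT Skylake variants).

```python
# Check which SIMD instruction set extensions this CPU advertises, by
# reading the "flags" line from /proc/cpuinfo (Linux only).
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for isa in ("sse4_2", "avx", "avx2", "avx512f"):
    print(f"{isa:8s} {'yes' if isa in flags else 'no'}")
```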

(why don’t images show up when quoting someone…?)

Anyway, on CPU performance, no question there has been improvement, but outside of server farms that want 90-100% CPU utilization, I think the average person, and even the average gamer, could allocate 2 cores out of a current i7-6700K (it comes with 4 cores) and not see any difference at all, provided they are using a discrete GPU. Browsing, watching movies, streaming, 95% of games: I don’t think you’ll see any noticeable difference.
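If anyone wants to actually run that experiment, here’s a minimal sketch using Linux CPU affinity (the benchmark name is just a placeholder; from a shell, taskset -c 0,1 does the same thing):

```python
# Restrict this process (and anything it spawns) to two cores, then run
# the workload under test. Linux-only; uses the scheduler affinity API.
import os
import subprocess

os.sched_setaffinity(0, {0, 1})                  # pin to cores 0 and 1
print("allowed cores:", os.sched_getaffinity(0))

# "./my_benchmark" is a placeholder for whatever you want to compare
# against an unrestricted run.
subprocess.run(["./my_benchmark"])
```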

Image editing / Video editing and other compute areas you will likely see a difference in performance.

Yes, emulators of recent console systems (and the Wii is relatively recent; Dolphin emulates both GameCube and Wii) will definitely be CPU intensive. And they are single-thread intensive, like most tasks tend to be.