Intel's stagnating performance is not in your head

Now this guy’s shtick is not my favorite, and I’m sure there are some methodological complaints that could be made, but basically he dug out ten years’ worth of CPUs and measured their performance over time. It’s worth pointing out he’s using the extreme versions of multi-core server CPUs, which is a huge caveat, imo. But he showed that single-core performance has stagnated on Intel CPUs since 2011, and that’s why, aside from USB 3 and other modern platform support, your old i7 2600K is probably just as good as the new ones in most applications. I know that I’m still forced to use 32-bit single-threaded applications in my business on a daily basis, and multicore performance simply doesn’t matter on that particular program.

I don’t think this is a surprise to the geek population.

Luckily there’s still some room for advancement in GPUs and mobile CPUs. But maybe not too much. Once that runs out, we’ll really be stuck until a major tech improvement comes along.

Diego

Intel has been very focused on power consumption, both raw power draw and thermal characteristics (in particular, not needing large heatsinks), to allow for super-thin laptops with long battery life. They have been super successful on that front.

In addition, on the Xeon side they have made huge improvements in the number of cores per CPU, which does make a difference to people who care about rack space or have parallel-friendly workflows.

It’s not like they aren’t still innovating; they just have other focuses.

That said, if you really care about single-core speed, the 7700K would be about a 30-35% improvement over that 2600K.

https://www.cpubenchmark.net/singleThread.html

Yeah, going from 1000 to 2000 in that chart spans about 5-7 years depending on which CPU model you choose, so this guy’s argument is kinda bunk.

(And wow that i7-7700k number is incredible, if true. But I am skeptical because http://www.pcgamesn.com/intel/intel-kaby-lake-benchmarks)
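
As a rough sanity check: a doubling of that single-thread score over 5-7 years implies only a 10-15% compound gain per year. A quick back-of-the-envelope sketch, assuming the PassMark score scales linearly with real single-thread performance:

```python
# Implied compound annual gain if the single-thread score doubles over N years.
# Assumption: the PassMark score scales linearly with real performance.
for years in (5, 6, 7):
    annual = 2.0 ** (1.0 / years) - 1.0
    print(f"doubling over {years} years ~ {annual:.1%}/year")
# doubling over 5 years ~ 14.9%/year
# doubling over 6 years ~ 12.2%/year
# doubling over 7 years ~ 10.4%/year
```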

  • Are we reaching diminishing returns on CPU speed? Absolutely, since about 2009 when dual cores entered the mainstream.

  • Can you double performance going from a 2010 CPU to today? Depends on the application, but yes, if you are doing something performance-intensive. The real root cause is we have such an obscene amount of computing power on tap that “performance intensive” gets to be a narrower and narrower slice of the top of the pyramid every year:

https://blog.codinghorror.com/the-pc-is-over/

Intel hasn’t been able to significantly increase clock speeds for several years due to thermal limits. Per-core performance gains via architectural improvements are apparently much harder to achieve than by simply turning up the wick.

There has been more of a clock bump than you would think:

3.8 turbo vs 4.2 turbo does not sound like a huge difference, but the newer CPU spends more time at peak turbo speeds, thanks to Speed Shift and other architectural improvements.

Also, put turbo aside and consider the base clock speed difference – 3.4GHz vs 4.0GHz, which is pretty sizable. And the base clock for the upcoming i7-7700K is 4.2GHz, with a turbo of 4.5GHz!
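
For what it’s worth, the raw clock deltas from those numbers work out as below. This is just a sketch of the clock math; real-world gains also depend on IPC and how long turbo is sustained.

```python
# Percentage clock-speed gains from the numbers quoted above
# (clock only; ignores IPC gains and turbo residency).
pairs = [
    ("base, 3.4 -> 4.0 GHz", 3.4, 4.0),
    ("turbo, 3.8 -> 4.2 GHz", 3.8, 4.2),
    ("7700K base, 3.4 -> 4.2 GHz", 3.4, 4.2),
    ("7700K turbo, 3.8 -> 4.5 GHz", 3.8, 4.5),
]
for label, old, new in pairs:
    print(f"{label}: +{new / old - 1.0:.1%}")
# base, 3.4 -> 4.0 GHz: +17.6%
# turbo, 3.8 -> 4.2 GHz: +10.5%
# 7700K base, 3.4 -> 4.2 GHz: +23.5%
# 7700K turbo, 3.8 -> 4.5 GHz: +18.4%
```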

Speed Shift was integrated into Windows 10 in November, right?

This is kinda the crux of it. The flat reality is that creating an application that needs more power generally requires more resources. Hell, as we are all gamers, this is pretty obvious. Games with higher system requirements, visual fidelity, and all those fancy buzzwords just flat-out cost more*. So as the cost to develop the bleeding edge increases, the incentive to strike at it decreases. It’s why a lot of things don’t utilize multi-core performance.

So adding more speed and power to CPUs will, generally, increase the cost of pushing programs that max them out, which in turn creates less incentive to be there, which means less demand for higher-power CPUs. The result is the mass realization that power is generally ‘good enough’ and that development efforts are better spent elsewhere, with only some going here.

And, personally, I can’t fault them for focusing on power consumption, heat dissipation, and making top-line chips more mobile-friendly. It makes sense, and is a net win. It’s not for nothing that I love my Surface Pro tablet, after all.

*Obvious caveats can apply, so don’t go quoting Witcher 3 at me, people!

IPC increases 5-10% every generation, and clocks creep higher too, while consuming the same amount of power, generating less heat, and thus remaining at higher clocks more of the time. Single-generation leaps are incremental, but upgrading every 4 years offers a nice leap in performance, just not one that’s perceptible doing desktop stuff.
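
Compounded over a typical four-generation upgrade cycle, those single-digit gains do add up. A minimal sketch, assuming a flat per-generation IPC gain and ignoring clock changes:

```python
# Cumulative gain from compounding a flat per-generation IPC improvement
# over 4 generations (ignores clock-speed changes).
for per_gen in (0.05, 0.10):
    cumulative = (1.0 + per_gen) ** 4 - 1.0
    print(f"{per_gen:.0%}/generation -> +{cumulative:.0%} after 4 generations")
# 5%/generation -> +22% after 4 generations
# 10%/generation -> +46% after 4 generations
```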

It’s completely obvious that CPU performance is increasing slowly. The days of moving from a 486DX-33 to a 486DX4-100 and literally tripling your performance in a year are well behind us. But it is still increasing.

Regular doubling of perf is still happening on mobile, just not on desktop.

(Well, except for Android / Qualcomm from 2013 onward.)

Edit: Smooches, wumpus.

Not at all. Anand benched the A10 as 32% faster than the A9, which was itself 68% faster than the A8. Seventy percent is waaaaaay more than the 5-10% we get on desktops, but it’s not a doubling. And 30% is further still from one, indicating we may be hitting a soft threshold on mobile as well.
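
For reference, chaining those two generational gains gives the cumulative A8-to-A10 figure, which also makes the deceleration (68%, then 32%) easy to see:

```python
# Compound the per-generation gains cited above (A8 -> A9 -> A10).
a9_over_a8 = 1.68   # A9 benched 68% faster than the A8
a10_over_a9 = 1.32  # A10 benched 32% faster than the A9
print(f"A10 vs A8: {a9_over_a8 * a10_over_a9:.2f}x")  # ~2.22x over two generations
```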

Since the A10 essentially offers desktop performance (until it thermal throttles), that shouldn’t be particularly surprising.

Ya, this is the deal. The chips aren’t getting faster, but they’ve gotten way more efficient.

I think you misread what I wrote. Did I say “yearly”? Did I say “every release?” Nope.

Consider this: the iPhone 7 (2016) is literally ten times faster than the iPhone 4 (2010). Not on synthetics – in actual measured real world perf.

Now compare a 2010 Intel CPU with a 2016 Intel CPU in real world measured perf.
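
Ten times over those six years implies a far steeper compound rate than anything on the desktop side. A rough sketch, taking the 10x figure at face value:

```python
# Implied compound annual rate for "10x faster" between 2010 and 2016.
years = 2016 - 2010
annual = 10.0 ** (1.0 / years) - 1.0
print(f"10x over {years} years ~ {annual:.0%}/year")  # ~47%/year
```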

I did misread it as yearly.

Anyway, performance velocity appears to be slowing. In mobile terms, 2010 was the wild west, the 386SX-16 era, while 2016 is a fairly mature, staid Core 2 Duo. I wouldn’t expect any more 70% yearly performance gains simply because mobile chips have essentially caught up to desktops.

Well, Apple mobile CPUs have caught up to desktops – almost. Certainly by the iPhone 8 they will in every meaningful sense.

Qualcomm not so much.

Apple is the Intel of mobile. If you want to talk about Qualcomm, compare them to AMD.

(Ryzen is looking promising. But AMD is known to, ummm… exaggerate their performance claims.)

Qualcomm is not even at AMD levels. The 2016 Google Pixel is slower than the 2015 Nexus 6P at many tasks…

Oh for God’s sake stusser, don’t encourage his inane Qualcomm ramblings in yet another thread.