PC CPUs seem kind of stagnant

I’ve been encoding some videos recently, which is annoyingly time consuming. I bought the parts for this machine in late 2010, so I thought things had probably advanced apace. They haven’t. The i7-4770K, for example, is maybe 50% faster than this machine’s i7-950. My usual rule is that I don’t bother with a CPU upgrade unless I’m at least doubling performance. The soon-to-be-released i7-4790K looks to be under that threshold as well.

That’s just sort of surprising to me. 4+ years to double performance is a far cry from the rule-of-thumb 18 months of yesteryear.
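
For what it’s worth, here’s the back-of-the-envelope version (the ~50% figure above is my rough estimate, not a benchmark):

```python
# Back-of-the-envelope: what the old 18-month doubling rule would predict
# over roughly 4 years, versus the ~1.5x I'm actually seeing.
years = 4.0
doubling_period = 1.5                # years, the old rule of thumb
predicted = 2 ** (years / doubling_period)
observed = 1.5                       # i7-4770K vs i7-950, very roughly

print(f"18-month doubling predicts ~{predicted:.1f}x over {years:.0f} years")
print(f"Observed is more like {observed:.1f}x")
# 18-month doubling predicts ~6.3x over 4 years
# Observed is more like 1.5x
```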

It’s because performance isn’t a big selling point right now. CPUs have been more than powerful enough for the typical consumer for years. Where the money’s at is mobile devices (smartphones, tablets), and Intel is worried about ARM. So the race is now about power consumption and consolidating as much of the system as possible onto a single chip. Intel keeps doing die shrinks, but instead of using that extra space to add more cores, they’re beefing up the CPU’s integrated graphics and making each chip more power efficient than the last.

Intel’s going to throw a bone to enthusiasts this autumn with Haswell-E. Broadwell is basically late, and it’s not going to be a huge jump in performance for the reasons cited above. So Haswell-E is basically one more generation of Haswell chips, but they’re going to make an 8-core version, as in 8 physical cores and 16 threads, as well as a 6-core/12-thread version. Granted, the TDP will rise accordingly: 130-140 watts, up from the 4770K’s 84. (Haswell-E will also support DDR4.)

I complained last year about how GPU/CPU-assisted transcoding hasn’t advanced at all; even Intel QuickSync hasn’t caught on in terms of software support. CPU speeds don’t double every 1-2 years anymore, or even gain 50%. It’s 10-20% each year over the last, with the same big price premiums every year: $300-400 for a midrange quad-core i5 that’s 10-20% faster than last year’s. Still no full QuickSync support in Handbrake.
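
Just to put numbers on that, assuming a steady 10-20% gain each year, doubling takes a long time:

```python
# How long it takes to double CPU performance at modest yearly gains.
import math

for yearly_gain in (0.10, 0.15, 0.20):
    years_to_double = math.log(2) / math.log(1 + yearly_gain)
    print(f"{yearly_gain:.0%} per year -> ~{years_to_double:.1f} years to double")
# 10% per year -> ~7.3 years to double
# 15% per year -> ~5.0 years to double
# 20% per year -> ~3.8 years to double
```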

I think we can all thank Intel for bribing OEMs and retailers not to offer AMD products back when Athlon was king of the hill. This ensured AMD couldn’t reap the financial rewards for their performance wins and reinvest in R&D and fabrication advances that would have allowed them to stay competitive with Intel into this decade. That would have forced greater performance advances and better pricing for everyone.

I don’t think it’s simply lack of competition, or not a selling point. We really are at the end of Moore’s law in practical terms.

Dark Silicon and the End of Multicore Scaling ftp://ftp.cs.utexas.edu/pub/dburger/papers/ISCA11.pdf

Yup, there’s that, too. It’s one reason why it’s so difficult to overclock Haswell and other recent Core chips. We’re nearing the absolute limits of silicon.

It’s also why Broadwell is late. Die shrinks used to provide easy cost savings (you could squeeze a lot more chips out of each wafer than the previous generation), but the R&D costs and infrastructure required to get to 14nm and below are tremendous, and it only gets harder from this point on. Somewhere under 10 nm, quantum tunneling kicks in, and once that happens, silicon is fucked. Unless there’s a completely new, revolutionary material ready by then, the only choice is to start making bigger chips or layering them.
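
Here’s the ideal-scaling arithmetic behind why shrinks used to be such an easy win (node names are partly marketing these days, so treat this as a best case):

```python
# Ideal die-shrink scaling: transistor density grows with the square of
# the feature-size ratio, so the same design takes less wafer area and
# yields more chips per wafer (in theory).
old_node_nm = 22
new_node_nm = 14
ideal_density_gain = (old_node_nm / new_node_nm) ** 2
print(f"{old_node_nm}nm -> {new_node_nm}nm: ~{ideal_density_gain:.1f}x ideal density")
# 22nm -> 14nm: ~2.5x ideal density
```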

His CPU doesn’t support QuickSync. For encoding video specifically, upgrading would actually make a lot of sense.

Would it? Last I looked -all- of the GPU encoders were garbage for high quality encoding. Sure, if you just want a quick encode you can get something decent, but if you want to really squeeze the compression tech to get imperceptible differences at 20% of the size of the raw stream, for example, the GPU paths are crap.

Yeah, it’s not a lack of competition, it’s a lack of demand. The demand is for smaller chips that draw less power without sacrificing speed: speed stays the same, and the process shrinks go toward lowering power requirements.

Personally, I’m fine with it. I love the current Bay Trail chips that are basically fast enough for most light consumer workloads of browsing the web and such, but can run in a tablet the size and weight of an iPad Mini with 8-10 hours of battery life too.

I guess it depends on whether you mean free encoders or others. The GPU accelerated Adobe encoder works pretty well, with good quality results. What little I’ve seen of Vegas Video’s GPU encode looks pretty good, too.

QuickSync uses dedicated media hardware built into relatively recent (Sandy Bridge and later) Intel processors, not a discrete GPU. All accelerated encoders are lower quality than non-accelerated ones, but QuickSync is in its third generation now and supposedly a lot of those quality concerns have been addressed.

If you’re concerned about “imperceptible” improvements, accelerated encoding is probably not for you.
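
If anyone wants to compare the two paths on their own clips, here’s a minimal sketch; it assumes an ffmpeg build on your PATH that includes both libx264 and the h264_qsv (QuickSync) encoder, and the file names and quality values are just placeholders:

```python
# Encode the same clip twice: software x264 (slow preset, CRF quality
# mode) versus QuickSync via h264_qsv, then compare size and quality.
import subprocess

source = "input.mkv"  # placeholder clip

# Software x264: slower, but tunable for quality per bit.
subprocess.run([
    "ffmpeg", "-y", "-i", source,
    "-c:v", "libx264", "-preset", "slow", "-crf", "20",
    "-c:a", "copy", "x264_crf20.mkv",
], check=True)

# QuickSync: much faster; historically worse quality at the same size.
subprocess.run([
    "ffmpeg", "-y", "-i", source,
    "-c:v", "h264_qsv", "-global_quality", "20",
    "-c:a", "copy", "qsv_q20.mkv",
], check=True)
```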

It seems to me gaming is at a plateau right now. The art assets and coding required to get games to really use current high-end PC hardware are so cost prohibitive that even most AAA games aren’t pulling it off. At this point, only modders are really taking advantage of full-on PC gaming rigs.

That’s why I dropped a bunch of money on one though - it will be YEARS before I have to even think of upgrading. It’s a good time to invest in a nice gaming rig. :)

Oh, I don’t know about that. Thanks to Apple’s push for high-DPI displays and everybody else racing to catch up, 4K monitors are affordable this year (astonishingly) and will be cheap in late 2015. You need serious graphics horsepower to play games at 4K. Those GPUs cost $600+ today, and equivalent performance will probably be $300 next year. GPU workloads are inherently suited to massively parallel solutions, so GPUs aren’t hitting the same ceiling we’ve seen in CPUs, and probably never will.
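
The pixel math alone tells you why (and that’s before memory bandwidth and higher-quality settings, so treat it as a lower bound):

```python
# 4K pushes four times the pixels of 1080p, so per-frame shading work
# roughly quadruples before you touch any other settings.
pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160
print(f"4K has {pixels_4k / pixels_1080p:.0f}x the pixels of 1080p")  # 4x
```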

You won’t need to upgrade your CPU for the foreseeable future, though. I mean, at some point your current rig will break and you’ll get a new one. It’ll be cheaper and theoretically faster. But game performance won’t push that any earlier.

It’s mostly non-game tasks where I care about CPU power, but I’m something of an outlier that way. Most people are doing pretty undemanding things like browsing when not gaming.

So, to be a pedant, Moore’s law is actually about the number of transistors doubling every 18 months or so, not about speed. They used to go hand in hand, but now the extra transistors go toward everything from GPUs to more cores. Is Moore’s law actually dead now? I mean, I believe speed is, if you discount multithreaded programs, but transistors? It seems obvious it’s going to hit a screeching halt very soon, but has it already?

My 2500K is approaching 4 years, and considering how much faster it is than Jaguar in consoles, and the advent of Mantle/DX12, I am confident it will last another 5 years no problem.

There are some constraints to GPU growth. I’ve heard the GPU makers are struggling to get to 20nm. The GeForce Titan is a massive chip, and the GPU makers are hitting real physical limits on chip size and yield at 28nm. And 20nm will cost substantially more. Everyone trying to get to 20nm acknowledges that chip prices may actually go up at 20nm, where the trend had been downward up through 28nm.

Of course, they’ll go the multi-chip route, as AMD just did with the R9 295X2. But that architecture poses its own set of limits; not everyone wants a 1kW PSU.

Moore’s Law has always been about transistors. Speed (or, more accurately, performance) was a byproduct of the fact that you were doubling the number of transistors. The original Pentium back in 1993 had 3.1 million transistors. The Xbox One’s CPU/GPU has 5 billion. More transistors means the chip can do more things.
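
A crude fit from just those two data points (ignoring that one is a plain CPU and the other a CPU/GPU SoC) lands right about where Moore predicted:

```python
# Doubling time implied by the two transistor counts mentioned above:
# original Pentium (1993): ~3.1 million; Xbox One SoC (2013): ~5 billion.
import math

growth = 5_000_000_000 / 3_100_000
years = 2013 - 1993
doubling_time = years / math.log2(growth)
print(f"~{doubling_time:.1f} years per doubling over that span")  # ~1.9
```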

Moore’s Law has held up remarkably well for decades, but the main problem is that we’re nearing the absolute limits of silicon, in terms of the laws of physics. Somewhere around 6-8 nm (Broadwell is 14 nm), the transistor gates will be so damn tiny that electrons will quantum tunnel right through them, and once that happens, the chip is worthless. The only way to improve performance at that point will be to build bigger and bigger chips, which comes with huge problems in heat and power usage.

There are exotic technologies that everyone is trying to research to replace silicon, but there’s nothing I know of that will be ready for primetime by the time we hit the silicon wall.

Gus, if your computer is from 2010 you may not have an SSD, which would make a far bigger impact on usability/performance than a faster CPU. If you do, never mind.