Apple CPU vs. Intel CPU… Fight!

The Apple Watch runs iOS too, just slower, but on an ARM chip. So no, it isn't that.

I guess they could be taking advantage of economies of scale. They don't sell many Apple Watches.

Might be an attempt to make hackintosh systems harder to set up. Seems like overkill, though.

I wonder if Face ID needs the A10X chip, and that's why they're using it.

Let's see if Intel or anyone else reacts to this.

https://www.anandtech.com/show/12119/microsoft-launches-windows-10-on-arm-always-connected-pcs

I can't think of any hardware I'm less excited about than a fucking crapdragon, but running x86 Windows on ARM continues to be interesting!

Nah. Not if they start at $600. It's a complete non-starter, particularly if they ever charge for that Windows 10 S to full-fat Windows upgrade.

An interesting retrospective:

https://www.bloomberg.com/graphics/2018-apple-custom-chips/

It's incorrect to say that the A10 contained the first custom Apple-designed graphics chip. Apple had been substituting out parts of the GPU with their own blocks since the A8, and possibly even the A7; it's hard for me to remember which Apple chip contains which IMG part. And the A10 still contains chunks of IMG's IP. The A11 probably does as well.

The line was always blurry about which parts of the GPU were Apple's and which were IMG's, but the trend was definitely heading towards Apple's. They were replacing the entire shader engines (arguably the most important part of the GPU) by the A9, I think it was the A9.

I imagine it's a similar situation with every other piece of IP, ever since Apple first bought that layout place. I just spent a while trying to find a prophetic news/gossip article I read many years ago about Apple's purchase of P.A. Semi, which predicted the current situation where Apple now designs everything in their cores.

That’s nonsense. If a task is completed twice as fast at double the system power usage, it isn’t more efficient simply because it “gets back to idle quicker”.

Task power efficiency is simply the total energy used to perform the task, i.e. power × time. As frequency (and thus voltage) increases, processors become less power efficient.
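To put some numbers on that, here's a rough sketch using the textbook dynamic-power model P ≈ C·V²·f; the capacitance, voltages, and cycle count below are made-up illustrative values, not measurements of any real chip:

```python
# Rough illustration of why finishing faster doesn't automatically save energy.
# Dynamic CPU power scales roughly as P ~ C * V^2 * f; all constants here are
# made-up illustrative values, not measurements of any real chip.

def task_energy(freq_ghz, volts, work_cycles=3e9, capacitance=1.0):
    """Energy (arbitrary units) to finish a fixed amount of work."""
    power = capacitance * volts**2 * freq_ghz   # relative power draw
    time_s = work_cycles / (freq_ghz * 1e9)     # seconds to finish the task
    return power * time_s

low  = task_energy(freq_ghz=1.0, volts=0.8)    # slow, low-voltage operating point
high = task_energy(freq_ghz=2.0, volts=1.1)    # twice as fast, but needs more voltage

print(f"low-frequency energy:  {low:.2f}")     # 1.92
print(f"high-frequency energy: {high:.2f}")    # 3.63
# The faster run finishes in half the time but burns roughly 1.9x the energy,
# because power scales with V^2 * f while time only scales with 1/f.
```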

There are countless articles that show the difference in overall system efficiency (perf per watt) of desktop and laptop CPUs when under- or overclocked, e.g. https://www.kitguru.net/components/leo-waldock/intel-core-i9-7980xe-extreme-edition-18-cores-of-overclocked-cpu-madness/6/

If checking an email takes 30 seconds of screen-on time, or replying to a WhatsApp message 45, shortening the 30 seconds to 29.7 seconds because a faster processor enabled the task to complete in 0.3 seconds less time is incredibly unlikely to outweigh the added power inefficiency.

It's why battery-saving kernels on Android can deliver significantly better screen-on time even though every process the system performs is lengthened.

If this is true, does that mean that one can't design a processor that is faster at the same watts of power drawn? I feel like old i7 processors (I'm rocking an i7-920, still!) are slower than far more modern processors, but draw the same watts of power.

Core i7-920 (Bloomfield, 45 nm): 2.667 GHz, 130 W
Core i7-4770K (Haswell, 22 nm): 3.5 GHz @ 4 cores, 84 W

How can this be?

https://en.wikipedia.org/wiki/List_of_CPU_power_dissipation_figures#Intel_Core_i7

The 45nm and 22nm parts in brackets are quite important, and CPU design makes a difference too.

The point is, when ruling out any differences in CPU design, memory, display technology, or whatever else, undervolting and underclocking a CPU increases performance per watt, while the opposite (overvolting and overclocking) has been shown to reduce it.

In fact, it is. See:

Specifically, this chart I made from that post. The reason people think this is true is that some vendors put absurdly large batteries on some Android devices.

Note how, on an efficiency-per-watt basis, the iPhone 7 is off the charts, and also radically faster:

An iPhone 7 with a 3200 mAh battery would DESTROY these Android battery life scores. I mean utterly fucking demolish them. 3200 / 1960 = 1.63, and 1.63 × 9.22 = 15.02. That's right: just over fifteen hours of battery life. And all that while being 2x-3x as fast. Sad!
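For what it's worth, that back-of-the-envelope estimate just assumes battery life scales linearly with capacity, which is a simplification; a quick sketch of the same arithmetic:

```python
# The back-of-the-envelope scaling above, assuming battery life scales
# linearly with capacity (a simplification, but fine for a rough estimate).

iphone7_battery_mah = 1960
iphone7_web_hours   = 9.22   # the web-browsing result cited above
bigger_battery_mah  = 3200   # capacity of the larger Android phones

scaled_hours = iphone7_web_hours * (bigger_battery_mah / iphone7_battery_mah)
print(f"{scaled_hours:.2f} hours")   # just over fifteen hours, as above
```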

You could at least pretend to read a post before replying with unadulterated crap.

I never said that the A10 was more or less power efficient than any other processor. I said that completing a task quicker is not automatically more efficient (and generally, efficiency scales badly once frequency starts to ramp up).

What exactly does A10 performance vs. a Snapdragon anything have to do with the scaling of absolute performance vs. power efficiency?

Oh right, it has nothing to do with it; you're just fundamentally incapable of taking in any new information unless it agrees with whatever your idiotic opinion of the month is.

You argued that higher absolute performance is automatically more efficient because it completes a task quicker.

That is empirically untrue in many tests on laptop and desktop CPUs, and you haven't posted any reason why CPUs in smartphones would somehow be immune to the efficiency loss inherent in ramping up voltages to maximise performance. Instead, to paraphrase:

“LOOK AT THESE BENCHMARKS APPLE IS FASTER THUS FASTER CPUS ARE MORE POWER EFFICIENT EVEN THOUGH THE GRAPHS ONLY MEASURE OUTRIGHT SPEED”

I can’t even find the words to describe just how painfully illogical your argument is. Unless you measure an Apple SoC at <freq 1> vs <freq 2> and find the difference in performance per watt at the different frequencies, you simply can’t argue that faster is better because one smartphone that runs a completely different architecture, SoC and operating system is faster than another.

Even worse, you don't understand that a web browsing battery life test is pretty much the worst one you could pick for testing CPU efficiency, as it isn't particularly CPU intensive and is mostly down to display technology and the WiFi implementation on the SoC. Websites that actually have an elementary school understanding of basic scientific principles instead create a benchmark that runs offscreen and run it until the phone dies. They then work out (performance × battery life) and divide that by the battery capacity to get a very rough proxy for efficiency across different phone architectures.
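In other words, the rough proxy those sites compute looks something like this; the scores, hours, and capacities below are placeholder numbers, not real results:

```python
# A very rough proxy for SoC efficiency from an offscreen run-until-dead
# benchmark, as described above. All numbers are placeholders, not real data.

def efficiency_proxy(perf_score, battery_life_hours, battery_wh):
    """(performance * battery life) / capacity -- higher is better."""
    return perf_score * battery_life_hours / battery_wh

phone_a = efficiency_proxy(perf_score=100, battery_life_hours=6.0, battery_wh=7.45)
phone_b = efficiency_proxy(perf_score=70,  battery_life_hours=8.0, battery_wh=12.0)

print(f"phone A proxy: {phone_a:.1f}")   # faster phone, smaller battery
print(f"phone B proxy: {phone_b:.1f}")   # slower phone, bigger battery
# Dividing by capacity stops a huge battery from masking an inefficient SoC.
```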

Of course, that doesn’t matter because you’re only going to reply with more irrelevant guff that shows you have a desperate level of ignorance on anything beyond APPLE GOOOOOOD or QUALCOMM BADDDD.

In fact it is, if you look at the data above, because "quicker" CPUs and SoCs typically use newer and smaller processes (30 nm, 22 nm, 14 nm, etc.).

Yes, if you come up with some arbitrary nonsense like BUT HEY MAN WHAT IF IT IS USING A BILLION TIMES THE POWER, then perhaps you have… come up with some arbitrary nonsense. So, uh, congratulations on your arbitrary nonsense? I’ll be over here looking at actual data.

Also:

Throughout all of the power optimizations mentioned so far … has been the concept of race to idle or race to sleep. The idea is simple enough: get the job done as soon as you can and then get the CPU back to an idle state. This is an effective strategy because processors operate on a voltage/frequency curve, resulting in higher power consumption at higher frequencies, and conversely power consumption quickly drops at lower frequencies.

Look kedaha, those low-power cores in the A11 he loves so much exist for no purpose at all. Things like P-states and C-states are just something made up to give hardware engineers something to do. I mean, it's a fundamental law that raising voltage causes more leakage from the transistors and lowering it reduces the leakage, which saves power, but hey, look, an unrelated chart!

It's like a sports conversation where the other person keeps talking as if RBIs are the only important stat, except you're discussing soccer.

As AnandTech says, it is also a fundamental law that the faster you can get to idle, the more power you save.

Throughout all of the power optimizations mentioned so far … has been the concept of race to idle or race to sleep. The idea is simple enough: get the job done as soon as you can and then get the CPU back to an idle state. This is an effective strategy because processors operate on a voltage/frequency curve, resulting in higher power consumption at higher frequencies, and conversely power consumption quickly drops at lower frequencies.

I think a big part of it is that he just doesn't really understand how complicated modern workloads are. Obvious stuff like memory-fetch timescales just doesn't sink in. So this myopic, religious idea that a single core's performance is the only thing that actually matters has taken hold, and basic, obvious facts, like the hardware he praises so much actually including the things he thinks are useless (like low-power cores), are something he literally can't acknowledge. To the point that he will simply fail to register their presence and cherry-pick quotes that he thinks support his very limited, surface-level view of things, not understanding that they don't really support it. It's actually kind of fascinating to watch when it isn't maddening.

Single threaded perf is, by far and without question, the number one thing that determines overall performance for the average user on an average computing device. If this truth is uncomfortable to you, I suggest you look inside yourself and ask why that is – and who or what your allegiances are to. I fight for the users, and I am a fan of computers getting cheaper and faster all the time. That is the natural order of things.

Yes, if you are encoding videos all day long, compiling C++ code all day long, or playing games all day long (or even worse, mining cryptocurrency all day long), you will have different needs. I never said otherwise. But these are rare use cases, and the people who have them tend to know what they need.

You need to substantiate your assertion a bit. I’m not aware of any modern single-threaded applications; everything is engineered to take advantage of multiple threads and therefore multiple cores. Are you confusing the monolithic thread model of your beloved JavaScript with the environments in which it’s presented, namely browsers?

Wumpus is basically correct. Modern applications are multi-threaded, sure, but the issue tends to be that there is a critical path that is processed by a single thread.

Think of it as a project schedule for constructing a new building. The concrete for each floor takes time to dry, and you can't build a floor on top of a wet, unset base. The time to lay a floor and wait for it to dry dominates the schedule. Adding crew to offload other tasks (electrical work, finishing lower floors, etc.) can help a bit, but it doesn't help much.
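The building analogy is basically Amdahl's law: once a chunk of the work sits on a single-threaded critical path, extra cores stop helping quickly. A rough sketch, assuming (purely for illustration) that 40% of the time is on that critical path:

```python
# Amdahl's law: overall speedup when only part of the work scales with cores.
# The 60% parallel / 40% serial split is an assumption for illustration,
# not a measurement of any real workload.

def speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores -> {speedup(parallel_fraction=0.6, cores=n):.2f}x")
# With 40% of the time on the critical path, even 16 cores top out around 2.3x,
# while a core with 30% faster single-threaded perf speeds everything up by 1.3x.
```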

But Wumpus is facing an uphill battle. We all read reviews that benchmark with workloads the vast majority of us will never actually run. Cinebench? 3D Studio Max? OK then. The reviews then show us that even with these silly benchmarks, it still doesn't matter much. But what do we all do? Go and buy the i7, of course!