Apple CPU vs. Intel CPU… Fight!

I second that opinion.

My mind was blown by our first production Node.js conversion of a traditional Java backend. It highlighted the huge gains that can be made once we identified IO as the main blocker and started designing the code around Node’s non-blocking nature. It let us massively reduce user response times AND greatly increase the number of concurrent users we could serve with the same CPU and RAM, which would be unthinkable in Java.

We came away awed that the race-to-idle design of the code allowed a single thread to outperform what we used to do with multiple threads by about two orders of magnitude, and with a more predictable usage of RAM and CPU cycles!
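A minimal sketch of the pattern described above (the service URLs and response shapes are hypothetical, and Node 18+ is assumed for the built-in fetch): instead of one thread per request blocking on each backend call in turn, a single event-loop thread fires all the IO at once and goes back to idle while the network does the waiting.

```typescript
// Hypothetical endpoint names; the point is the shape of the IO, not the APIs.
async function buildDashboard(userId: string) {
  // All three calls are in flight concurrently, so response time is roughly
  // the slowest call rather than the sum of all three, and the thread is
  // free to serve other requests while these are pending.
  const [user, orders, recs] = await Promise.all([
    fetch(`https://users.internal.example/${userId}`).then(r => r.json()),
    fetch(`https://orders.internal.example/${userId}`).then(r => r.json()),
    fetch(`https://recs.internal.example/${userId}`).then(r => r.json()),
  ]);
  return { user, orders, recs };
}
```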

It’s fairly easy to see how a modern OS handles tasks on our own computers. Just open Activity Monitor or Task Manager and watch the spikes of CPU activity while we work. The CPU is idle most of the time (in fact the heaviest user will often be the monitoring tool itself!), and that is where the power savings are made: complete the task extremely quickly, race to idle, and let the rest of the subsystems do their jobs.

This is not contrary to what other people are discussing. Conflating current desktop CPUs with low-power mobile SoCs is a major reason both sides seem to talk past each other so much. While they share a lot, there are enough differences to make it an apples-and-oranges comparison. We’re also discussing system performance, not single-application performance.

It’s like comparing an FPGA to a CPU on clock speed and concluding the CPU is the better option for everything since it has 10x the clock speed, despite the fact that an FPGA solution can crush a CPU solution for the right workload. One of them can turn a packet around in a few microseconds, the other in a dozen nanoseconds in the right applications, and the fast one is the one with the crap clock speed.

The idea that the only thing that matters in the mobile world is the single-core speed of the fastest core, because everything comes down to racing to idle, falls apart the second one realizes that designers literally include lower-power cores on the same SoC for overall power efficiency. What is the point of those cores if they are uniformly worse than the high-powered ones? There’s a reason random partial quotes from tech writers keep getting spewed forth instead of any of these points being addressed. The lower-power cores are there because they work and they matter, and the power-constrained mobile SoC is a more complicated world.

Mobile SoCs use big/little pairings of high- and low-performance cores, while desktop CPUs use uniform cores, because they are different use cases with different realities. To claim that desktop rules of thumb apply uniformly to mobile, you first have to explain why the very smart engineering departments at many companies, who all believe the two are different, are wrong.
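Here is a back-of-the-envelope sketch of why a slower little core can still win on energy; the numbers are illustrative assumptions, not measured figures for any real SoC. Energy is power × time, and for light background work the little core’s power deficit is much larger than its speed deficit.

```typescript
// Illustrative, assumed figures only; not taken from any datasheet.
type Core = { name: string; freqGHz: number; activeWatts: number };

const big: Core    = { name: "big",    freqGHz: 2.4, activeWatts: 4.0 };
const little: Core = { name: "little", freqGHz: 1.1, activeWatts: 0.4 };

// Energy to finish a fixed amount of work = power x (work / frequency).
function energyJoules(core: Core, workGigacycles: number): number {
  const seconds = workGigacycles / core.freqGHz;
  return core.activeWatts * seconds;
}

const work = 1.2; // gigacycles for some light background task, e.g. a mail sync
console.log(energyJoules(big, work).toFixed(2), "J on the big core");       // ~2.00 J
console.log(energyJoules(little, work).toFixed(2), "J on the little core"); // ~0.44 J
```

The big core still matters for latency-sensitive, user-facing bursts; the point is only that for work that doesn’t need to finish instantly, the scheduler has a much cheaper place to put it.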

Smartphones and tablets are on a direct collision course with laptops, and often desktops… and they have been for years. The day when you plug your phone into a monitor and keyboard on the regular is coming.

Hell, the iPhone 7 and 8 are already faster than the majority of desktop computers out there purchased in 2014 or earlier. Last time I checked, battery life mattered to laptops, too – and there’s not a tremendous amount of difference in battery volume between today’s ultra thin laptops and a tablet anyway.

Pretty much anything the average user would want to do with a “real computer”, they can do on a phablet or tablet. Because those are also real computers, and getting substantially more real every year.

Your idea that these are special magical computers with special magical rules that apply only to them is not credible. Yes, I concede that desktops are permanently plugged into the wall and can have power budgets far beyond the 10 W max a tablet should ever hit, but since when has this ever been about desktops?

This is a meaningless internal implementation detail. Even if they include “xxtra sweet blast power low voltage processing cores”, the only thing that matters is measured battery life under realistic loads.

So I present to you, again… measured battery life while browsing the web:

Take a good, long look. You’ll find the story has not improved since then, either.

Heh. That just makes me even less likely to get an iPhone. So much headroom to increase battery capacity and it still has shitty battery life measured in hours. Double it! They’re not alone in this, of course, but with Android you can at least optimise for battery capacity if you choose to.

That is a strange conclusion to reach. My experience with both iOS and Android devices shows that Apple’s devices consistently have better battery life given the same size and format. But I attribute that to the better design controls they have in place compared to run-of-the-mill Android devices.

Even Sony was not immune to bad engineering. My Xperia Z had horrendous battery life.

Given the same size and format, maybe. But the point remains that Apple aggressively favours thinness over battery life. So do some (too many) Android manufacturers, it’s true, but if I want to buy an Android device with 50% more battery life than an iPhone 7, according to that chart, I can, through sheer brute force. I can’t buy an iPhone with reasonable battery life, because Apple has chosen to use its superior efficiency to shrink the size of the device instead.

To be fair, Excel on Mac didn’t take full advantage of multi-core until recently.

But to be fair, Excel is heavily user- and UI-dependent, and the extra cores are basically wasted waiting on the user.

Except for heavy calculations. I have a few coworkers who do heavy Excel work and have run into bottlenecks.

Ah, that explains it. But I shudder to think about the code for those calculations, in Excel.

I was responding to Scott, not you. He was specifically discussing desktops. I know you don’t read anyone else’s posts, so I am not surprised that you can’t grasp the concept of a conversation, where you respond to the other person’s thoughts instead of puking out a one-sided fevered rant.

You are handwaving away the importance of the low-power cores by providing a battery life chart of the iPhone 7, which has 2 low-power cores. You claim that they don’t matter because of a jpeg, and that the whole mobile SoC design industry is wrong.

Edit: an even more obvious difference is the dominance of the more power-efficient ARM RISC architecture in mobile over the more power-hungry x64 CISC architecture that dominates desktop. To hand-wave these things away as inconsequential, without any explanation of why the system designers of the world are wrong, is silly.

You’re still refusing to understand that that does not show that faster CPU speeds are more battery efficient. It only shows two things and does not in any way provide evidence of a causal link between the two:

  1. iPhone 7 SoC is more efficient than competing chips

  2. iPhone 7 SoC is faster than competing chips

That you somehow believe that one implies causality in the other would have you laughed out of a classroom in any kind of an intro to science class.

You do realise that the Pastafarian argument about pirates causing global warming is a joke, right?

Also, if you had bothered to read the entire page of the AnandTech article you linked, you would have seen this graph:

So, the higher the frequency beyond the voltage floor, the less power-efficient the SoC is. Savings from going to idle faster are therefore related to how long the ‘task time’ is compared to the required system-on time, e.g. loading a webpage and then the user reading it for 5, 10, 45, or 90 seconds.
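To make that trade-off concrete, here is a rough sketch with assumed, illustrative numbers (not taken from the AnandTech graph): dynamic power scales roughly with C·V²·f, so finishing the work above the voltage floor costs more energy per unit of work, and racing to idle only pays off if idle power during the remaining screen-on time is low enough.

```typescript
// All figures are assumptions for illustration, not measurements.
const idleWatts = 0.15;     // SoC + rails while the user reads the page
const screenOnSeconds = 30; // the page stays on screen this long either way

// "Fast" runs above the voltage floor (higher V and f); "slow" sits at the floor.
const fast = { activeWatts: 3.0, taskSeconds: 1.0 };
const slow = { activeWatts: 0.8, taskSeconds: 2.8 };

function totalJoules(mode: { activeWatts: number; taskSeconds: number }): number {
  const idleSeconds = screenOnSeconds - mode.taskSeconds;
  return mode.activeWatts * mode.taskSeconds + idleWatts * idleSeconds;
}

console.log("race to idle:", totalJoules(fast).toFixed(2), "J");    // ~7.35 J
console.log("slow and steady:", totalJoules(slow).toFixed(2), "J"); // ~6.32 J
```

With these particular numbers the slower, floor-voltage run wins on total energy. Below the voltage floor the picture flips: frequency falls without a matching voltage drop, energy per unit of work stops improving, and finishing quickly so the whole platform can power down becomes the better strategy, which is exactly why the task-time-to-system-on-time ratio matters.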

It is sad that the PowerPC and DEC Alpha chips lost out to Intel in the CPU wars. When I was doing my computer engineering degree back then, I watched in horror as the accident of economics trumped the elegance of the RISC architecture’s design.

I claim the internal implementation details matter in exactly one way: in terms of real world battery life on real world tasks.

As much as you might love to believe otherwise, smartphones and tablets are nothing more than computers with batteries strapped to them, like all the other computers with batteries strapped to them that we’ve owned for the last 25 years.

Apparently this is news to AnandTech, who say over and over:

2010

Measuring idle power is important in some applications as operating system schedulers may choose to “race to idle”, i.e. perform the task as quickly as possible so the CPU can return to an idle state. This strategy is only worthwhile if the idle state consumes very little power, but lots of server applications are running at relatively low but almost never “zero” load.

2013

We’ll start out our power investigation looking at behavior at idle. Although battery life when you’re actually using your device is very important, having a fast SoC that can quickly complete tasks and race to sleep means that you need to be able to drive down to very low idle power levels to actually benefit from that performance

2015

The new way of managing this comes under the Speed Shift mantra. Intel is promoting this as a faster response vehicle to frequency requests and race to sleep by migrating the control from the operating system back down to the hardware. Based on the presentations, current implementation on P-states can take up to 30 milliseconds to adjust, whereas if they are managed by the processor, it can be reduced to ~1 millisecond.

Hey, guess where Anand Lal Shimpi went to work a few years ago? Let me give you a hint: 🍎

To be pedantic, the CPU need not be at zero voltage to enjoy power savings; even a small absolute step down in voltage from the high-power state can lead to large relative power savings.

The SpeedStep design is a great innovation in that regard.
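For what it’s worth, a rough sketch of why a modest voltage step helps so much, under the usual approximation that dynamic power goes as V²·f (the figures below are illustrative assumptions, not measurements):

```typescript
// Assumed, illustrative operating points for a single core.
const high = { volts: 1.10, freqGHz: 2.4 };
const low  = { volts: 0.90, freqGHz: 1.6 };

// Dynamic power ~ C * V^2 * f, so the ratio between states is (Vl/Vh)^2 * (fl/fh).
const relativePower = (low.volts / high.volts) ** 2 * (low.freqGHz / high.freqGHz);
console.log(`the low state draws ~${(relativePower * 100).toFixed(0)}% of the high state's dynamic power`);
// -> ~45%: more than half the dynamic power saved without going anywhere near zero volts
```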

And you provide no basis for your religious belief that none of the things we are talking about matter for battery life.

It seems you don’t understand the differences between the ARM and x86 architectures, why the big.LITTLE structure exists for mobile devices and not desktop chips, and why Apple uses it in the very devices you claim it doesn’t matter in.

So instead of admitting that you don’t know, or that someone in the world other than you has good ideas (apparently everyone at Apple doesn’t know wtf they are doing, because why else would they put those meaningless low-power cores into their phones if they are worthless?), you have made it your mission to puke cherry-picked quotes across this forum while ignoring the rest of the articles you quote from, in a zero-signal, 100%-noise way, so that the people here who have a clue what they are talking about can’t have a conversation without you spewing forth.

You still just quote random crap without actually ever addressing any of the questions asked.

Why does everyone at Apple, including (presumably) Anand Lal Shimpi think that the A10/A11 should have low power cores when they do not matter at all? Why are they all wrong and you are right?

Same reason he started and doggedly maintains a thread with an obscene, juvenile title where he exhibits the same behavior.

Jesus. You guys should all buy stock in Qualcomm if you’re so personally offended by their technical incompetency. Put your money where your mouths are.

That wasn’t the argument – the argument was “fast CPUs are worse for battery life”. Which is demonstrably false; see the above quotes on race to idle.

Whether they clock down, or switch to “special” CPU cores in cases of idle and near-idle is an implementation detail. What matters is how well the particular implementation works. And in the case of Qualcomm, the answer is “not very well”. Which isn’t surprising in the least if you’ve followed their sad trail of incompetence to where we arrived today.

And then you linked to three quotes that still don’t say what you claim they say.

The first quote doesn’t say that racing to idle means clocking a CPU faster is more power efficient; it talks about OS scheduler design and the fact that CPUs need to be able to respond to it accordingly.

You would know that if you had the capability of actually reading and understanding words, but I’m starting to suspect you struggle at anything beyond “Qualcomm bad” “Apple good” “Discourse best”.

The second quote says that

you need to be able to drive down to very low idle power levels to actually benefit from that performance

It doesn’t say that clocking the same CPU higher is more power efficient. It just points out that completing a task quicker is pointless unless your idle power levels are very, very low.

The third quote refers to how CPU frequency responding to demand obviously benefits from being able to respond quicker and thus turn off when not needed.

It doesn’t say that faster = more power efficient.

You’re more than a bit touched.

P.S. What about those pirates eh?

And guess what nm process a newer, faster CPU is going to be on? A smaller one. Guess what happens when you move a CPU to a smaller process? It gets more power efficient.

In fact, it does. Repeatedly. Sorry if you can’t or won’t understand that.