Apple CPU vs. Intel CPU.. Fight!

Ah, I see where the problem lies. I commented on the race-to-sleep design, and it was taken to mean that I think it's the only thing that matters, which is of course absurd.

Edit: I thought the line of argument that wumpus was following was this.

  1. Apple CPUs had higher single-chip perf compared to Qualcomm chips.
  2. This allows them to finish tasks faster and take better advantage of the race-to-sleep design.
  3. Thus they are more efficient simply because they can utilize the design more (see the sketch below).
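
For what it's worth, here's a minimal back-of-the-envelope sketch of that race-to-idle arithmetic. All the power draws and task times are made up purely to illustrate the shape of the argument, not to match any real chip:

```python
# Minimal race-to-idle model. All numbers are hypothetical, chosen only
# to illustrate the shape of the argument, not to match any real chip.

def window_energy(active_w, task_s, idle_w, window_s):
    """Joules consumed to run a task, then idle out the rest of the window."""
    return active_w * task_s + idle_w * (window_s - task_s)

WINDOW_S = 60.0  # fixed one-minute measurement window

# The faster chip draws more while active, but finishes sooner and sleeps.
fast = window_energy(active_w=3.0, task_s=10.0, idle_w=0.1, window_s=WINDOW_S)
slow = window_energy(active_w=2.0, task_s=20.0, idle_w=0.1, window_s=WINDOW_S)

print(f"fast chip: {fast:.0f} J, slow chip: {slow:.0f} J")
# fast: 3.0*10 + 0.1*50 = 35 J; slow: 2.0*20 + 0.1*40 = 44 J.
# The fast chip only wins because its idle draw is tiny; raise the idle
# floor and the result can flip.
```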

That was what I thought I was reading, because I agreed with the argument. I was scratching my head over why anyone would oppose a simple argument like that, especially since Intel and Toshiba et al. advertised their SpeedStep and inverter technologies as more energy efficient when they first became popular decades ago.

Now it makes more sense. You guys think that I take the position that it is the ONLY thing that drives efficiency. Which is patently absurd.

Edit2: It would be helpful if someone could point out which post of mine gave that impression. It will be interesting to look at the language I used to convey that.

Again, this is obvious. I said repeatedly that:

So yeah, if you (for whatever reason) have your device at 100% load for hours, you’re gonna have a bad time. The good news is that very very few real people in the actual world use computers that way.

That’s why I keep citing Anandtech’s web browsing test: they are a reputable source, it’s a representative test (browsing the web is a very common way to use a computer for billions of people), and it does legitimately require a lot of power these days what with the explosion of JS on the web (and badly written advertising networks).

Still strange that Anandtech didn't review the iPhone X or 8, though. I hope they aren't getting out of the smartphone review business, because that's some of the most interesting and mainstream-relevant computing going on these days.

It is relevant to the topic, because even an awful laptop or desktop is going to have a dedicated heat-dissipation solution, so Intel can build for 15W to 90W of sustained load over a period of hours, even days.

This.

It's still not the same display. It's closer in size to the iPhone 7+'s than the dramatically smaller display on the regular iPhone 7, but it's still smaller, with a higher pixel density.

But really, in your original comparison, the screen difference was so immense that it would of course be a major component in power consumption.

And yes, while an AMOLED screen can be more efficient, it depends on what you're showing on the screen. Per pixel, AMOLEDs are less efficient, but they have the advantage of being able to turn pixels off entirely. For web browsing, it would depend on what kind of pages you were looking at.
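
As a toy illustration of that content dependence (the 1.0W backlight and 1.8W full-white figures below are invented for the sake of the example, not measured panel numbers):

```python
# Toy display-power model. An LCD backlight draws roughly constant power
# regardless of content; an AMOLED's draw scales with how much of the
# panel is lit and how brightly. Coefficients are hypothetical.

def lcd_power_w(backlight_w: float = 1.0) -> float:
    return backlight_w  # content-independent

def amoled_power_w(avg_brightness: float, full_white_w: float = 1.8) -> float:
    # avg_brightness in [0, 1]: mean per-pixel emission across the panel
    return full_white_w * avg_brightness

for content, brightness in [("dark-theme page", 0.2),
                            ("typical web page", 0.7),
                            ("all-white page", 1.0)]:
    print(f"{content}: AMOLED {amoled_power_w(brightness):.2f} W "
          f"vs LCD {lcd_power_w():.2f} W")
# Dark content favors the AMOLED; bright, mostly-white pages favor the
# LCD, which is exactly why the browsing comparison depends on the pages.
```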

Ultimately, I don't really care about the argument you guys are having. I was just pointing out that the battery life of a phone isn't a metric of the processor alone, but of the entire suite of hardware it's using… and ignoring something like the display was absurd.

It's indeed a very fair point. Which is why I brought up the iPhone 7+ versus the Galaxy S7, which has a more comparable screen size:

Galaxy S7 – 5.1 inch Super AMOLED, 2560 x 1440 pixels (577 ppi)
iPhone 7+ – 5.5 inch IPS LCD, 1920 x 1080 pixels (401 ppi)

And it shows the exact same effect – faster SoCs are more efficient:

Yep, it is. If you can find exact figures on how much power that 5.1 inch S7 screen uses versus the 5.5 inch iPhone 7+ screen, feel free to share. I couldn't, but I'm betting the AMOLED's power efficiency, plus the iPhone 7+ having physically more screen to power, means they end up using about the same amount of power overall.

But the inverter technology doesn’t work as you described, something which you seem to be ignoring now that I pointed out your error.

SpeedStep also dynamically alters voltage, which is something you’re also ignoring.

Indeed, you seem to be busy ignoring all the information that has been given to you because you’re too busy ‘sticking to your guns’ on this.

Which, again, is very reminiscent of a certain other poster who is quite biased and will ignore any evidence that runs contrary to that bias.

Hey @Wumpus

The faster CPU isn't more energy efficient here. I thought you said this couldn't happen; in fact, I can quote you saying it can't happen several times.

What gives?

Again, wumpus' argument is not that a faster chip is inherently more efficient, but that the faster chip can take advantage of the race-to-idle design better and thus be more efficient.

If you intentionally misread an argument for the sake of wanting to be right, that's your prerogative. I just happen to disagree with you. And since when did I ignore voltages? I just happen to think that the tangential issue you bring in does not contribute to the argument; it conflates things without being helpful. It seems that assigning intentions and misreading other people's motives and arguments is a pattern that keeps repeating.

Like I said, we keep talking past one another. I tend to avoid antagonistic discourse, personal insults and ad hominem (notwithstanding the fact that it may make the thread more exciting, it's not my cup of tea), so you'll have to forgive me for disengaging. You can have the last word on this.

Yes, and I posted an example of a faster chip taking advantage of the race-to-idle design and being less efficient. Wumpus argued that that is impossible.

If you intentionally misread an argument for the sake of wanting to be right, that's your prerogative

I haven't done any such thing.

I also haven't talked past you once. I've addressed everything you've stated, including spending 5 minutes disproving your ludicrous statement about 'inverter' air-conditioning units, and a solid 15-20 minutes trying to explain why race-to-idle is less important these days, and more importantly why running at high frequency is now so much less efficient than running at low frequencies than it used to be, because of process shrinkage and leakage, etc.

You ignored every single bit of that to keep repeating the same simplistic argument and maundering on about “we’re talking past each other”.

I’m afraid not.

I wasted my time correcting you (with sources!) on inverter a/c units. How is that talking past each other? That’s simply you getting caught up in ‘being right’. I explained the difference between race to idle and DVFS, you ignored that. I explained why race to idle USED to be a big deal but is less important now. You ignored that. I explained how fundamentally chip design has changed because of smaller and smaller processes. You ignored almost every bit of information I posted, and rebutted none of it.

P.S. You're still talking about SpeedStep as if it's a 'race to idle' technology. It isn't; it's a DVFS technology. Here is part of what Intel have to say about SpeedStep:

Separation of voltage and frequency changes. By stepping voltage up and down in small increments, the processor is able to reduce periods of system unavailability that occur (sic) frequency change. The system is then able to transition between voltage and frequency states more often, improving balance between power and performance.

&

Enhanced Intel SpeedStep Technology reduces the latency associated with changing the voltage/frequency pair, or P-state. Transitions can be undertaken more frequently, enabling more granular demand-based switching, and the optimization of the power and performance balance based on demand.

Which is what I explained to you several times, and you repeatedly ignored. DVFS and SpeedStep are not about increasing efficiency by maximising performance to reach idle as quickly as possible.
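
Here's a rough sketch of why that distinction matters, using the standard dynamic-power relation P ≈ C·V²·f. The capacitance, voltage and frequency values are illustrative placeholders, not real chip specs:

```python
# Dynamic CPU power scales roughly as C * V^2 * f, and higher frequencies
# require higher voltage, so energy per unit of work grows super-linearly.
# All constants below are illustrative placeholders.

C_EFF = 1e-9          # effective switched capacitance, farads (hypothetical)
WORK_CYCLES = 2e9     # cycles the task needs

def dynamic_power_w(volts: float, freq_hz: float) -> float:
    return C_EFF * volts**2 * freq_hz

# Race-to-idle: sprint in a high voltage/frequency P-state, then sleep.
sprint_p = dynamic_power_w(volts=1.2, freq_hz=2.0e9)  # ~2.88 W
sprint_e = sprint_p * (WORK_CYCLES / 2.0e9)           # ~2.88 J over 1 s

# DVFS: do the same work in a lower voltage/frequency P-state.
dvfs_p = dynamic_power_w(volts=0.8, freq_hz=1.0e9)    # ~0.64 W
dvfs_e = dvfs_p * (WORK_CYCLES / 1.0e9)               # ~1.28 J over 2 s

print(f"race-to-idle: {sprint_e:.2f} J, DVFS: {dvfs_e:.2f} J")
# On dynamic power alone, the slow run uses less than half the energy.
# Leakage current and the platform's idle floor pull the other way,
# which is why the balance shifted as process nodes shrank.
```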

Here’s an anandtech article on Speed Shift (the successor to SpeedStep): Examining Intel's New Speed Shift Tech on Skylake: More Responsive Processors

This task is likely one of the best case scenarios for Speed Shift. It consists of launching four web pages per minute, with plenty of idle time in between. Although Speed Shift seems to have a slight edge, it is very small and would fall within the margin of error on this test. Some tasks may see a slight improvement in efficiency, and others may see a slight regression, but Speed Shift is less of a power savings tool than other pieces of Skylake

Ok so looking at the graph

[graph: measured power draw over time for the three phones]

(We're just counting the pixel width of these lines on the graph here; what we care about is the relative difference in the numbers.)

Task energy

RAZR i (Atom): 298 × 3.4W = 1,013
RAZR M (S4 Plus): 465 × 2.9W = 1,349
iPhone 5 (A6): 362 × 2.4W = 869

Notice how the Qualcomm device uses a ton of energy on the task: a third more than Intel, over 50% more than the A6.

Idle energy

RAZR i (Atom): 227 × 1.4W = 318
RAZR M (S4 Plus): 62 × 1.5W = 93
iPhone 5 (A6): 161 × 1.0W = 161

This is what kills the Atom – its idle power draw is not great. It is marginally better than the Snapdragon's, but you already knew Qualcomm sucks at this. And because the Atom finishes the task sooner, it spends more of the window sitting in that weak idle state.

Total energy

RAZR i (Intel Z2480 Medfield SoC, 32nm): 1,013 + 318 = 1,331
RAZR M (MSM8960 Snapdragon S4 Plus, 28nm): 1,349 + 93 = 1,442
iPhone 5 (A6, 32nm): 869 + 161 = 1,030

These are fairly close overall, but Qualcomm is in a familiar position here at the back of the bus. Most energy consumption for a given task, even with a 28nm process advantage.
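
If you want to check my arithmetic, here's the same back-of-the-envelope calculation as a quick script; the pixel widths and watt readings are the eyeballed estimates from the graph above:

```python
# Re-deriving the task/idle/total sums above. Pixel width stands in for
# time, so the "energy" units are arbitrary but comparable across devices.

devices = {
    # name: (task_px, task_w, idle_px, idle_w) - eyeballed from the graph
    "RAZR i (Atom Z2480)": (298, 3.4, 227, 1.4),
    "RAZR M (S4 Plus)":    (465, 2.9,  62, 1.5),
    "iPhone 5 (A6)":       (362, 2.4, 161, 1.0),
}

for name, (task_px, task_w, idle_px, idle_w) in devices.items():
    task_e = task_px * task_w
    idle_e = idle_px * idle_w
    print(f"{name}: task {task_e:,.0f} + idle {idle_e:,.0f}"
          f" = total {task_e + idle_e:,.0f}")
# Prints the same task/idle/total figures as the tables above
# (within rounding).
```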

Did you? I don’t think you did. There are two examples here, one supports your case, but the other does not.

The Z2480 Medfield SoC is faster than the S4 Plus, and uses less overall energy on the same task.

1.4W versus 1.0W at idle is quite a difference – 40%. So yeah, if your idle state sucks, then you're gonna have a bad time. But that's the thing about newer, faster chips: they tend to be on smaller process nodes, which are inherently more power efficient, which leads to lower power consumption at idle.

Intel didn't do badly here, though. They clearly outperformed Qualcomm at 32nm vs 28nm, plus had slightly better idle power usage. It does make you wonder, though: what happened to Intel? And if you'd like an actual (cough) 🍎-to-🍎 comparison, put the 28nm A7 against the 28nm S4 Plus… I think you'll find that it is both faster and uses less power.

At least based on this data, it looks like Intel is the closest to offering a real competitor to Apple’s own platform from a power efficiency standpoint. We’re a couple quarters away from seeing the next generation of mobile SoCs so anything can happen next round, but I can’t stress enough that the x86 power myth has been busted at this point.

Why isn't Intel playing in this market? They clearly have the skills.

Just want to say that though things are kind of overly heated, this is a super interesting discussion.

More speculation. They already have the performance in the A11 (more than enough, actually), and they've completed a massive platform shift from PowerPC to x86 before.

Jean-Louis always has a smart take on these things.

Sure… but Apple has to actually do something. Something they've apparently been increasingly unable to accomplish in recent years.

Apple has a hard time doing… anything now. Linus Tech Tips went to the mat, angry at Apple, because Apple is apparently unable to repair iMac Pro screens even when the customer pays out of pocket, because no one has access to the parts and manuals to repair them; neither AppleCare nor third-party repair shops. Dave2D finally got an Apple-ready external graphics card, and while it works well for games, it's still disabled for GPU-accelerated rendering in Final Cut Pro and Adobe Premiere – which was the whole point of the project. I strongly suspect that the new Intel/AMD combo seen in the Hades Canyon NUC is going to be used in new Apple MacBook Pros, like the Iris Pro / Skull Canyon before it. But while the Iris Pro was used for months or years before Skull Canyon, I can't imagine when Apple – IF Apple – will ever update anything ever again.

Switching everything over to a new CPU is totally technically feasible but would require a level of commitment and accomplishment the modern Apple doesn’t seem to have. Apple today seems content to refine and master the supply chain stuff and crank out the same thing over and over with longer and longer delays between products.

Wow, I haven’t heard the name Jean-Louis Gassée since the BeOS days.