Apple CPU vs. Intel CPU… Fight!

The Great Satan Qualcomm! Which none of us have mentioned.

I know you don’t read the actual words in my post, or you would have realized that everything I am talking about is calling from inside your iPhone, in your own pocket!

The hilarity is that I am arguing that the hardware inside the very A11 Bionic that you worship is there for a reason, and you’re so deep in a fevered dream that you believe I am talking about Qualcomm.

Incorrect, as usual:

Really, interesting! The very fast CPU that is wildly more power efficient than its contemporaries using the exact same design methodology?

Thanks for sharing @lantz! Hey, looks like those internal implementation details really matter, eh? Who knew? You can’t just rub “big/little” marketing speak on your CPU and have it take effect. Shocking, I know.

I’m actually flabbergasted. Just how idiotic are you that you still don’t understand? A CPU at a newer process can be clocked lower. Do you think there’s some kind of process police that stipulates a processor on a new node is forbidden from being below a certain clock speed?

The third article does not say that. You’re more than welcome to quote where it does. It says that the CPU being able to respond more quickly to load results in power savings, not that it is more power efficient simply for being faster in absolute terms. It’s referring to dead time: the CPU operating at a high frequency even though the task has finished, because it’s reacting slowly.
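To put rough numbers on that dead-time point, here’s a minimal sketch; every figure below is invented for illustration. The savings come purely from reacting faster, not from finishing the work faster:

```python
# Toy illustration of "dead time": energy wasted when a slow-reacting
# governor keeps the CPU at a high-power state after the task finishes.
# Both power figures and the dead times are invented, not measured.

p_high_w = 2.0   # assumed CPU power at max frequency (watts)
p_idle_w = 0.1   # assumed CPU power at idle (watts)

for dead_time_s in (1.0, 0.1):  # slow vs. quick-reacting governor
    wasted_j = (p_high_w - p_idle_w) * dead_time_s
    print(f"dead time {dead_time_s:.1f} s -> {wasted_j:.2f} J wasted")
```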

Of course, you’re quite genuinely incapable of reading or understanding so…

He’s got to be trolling us at this point, right? He’s arguing that the A11 performs due to a divine soul, and that the mortal flesh of the technology it is built upon doesn’t matter? My compliments on its technology seem to be an affront to his religion.

“A faster cpu is more efficient because it’s on a smaller process node”

“So it’s not more efficient simply because it completes tasks quicker?”

“It is”

“But you ju-”

https://forum-cdn.quartertothree.com/uploads/default/optimized/3X/e/5/e5cb1da4cf749d20804807751ce91b7422046c13_1_690x271.jpg

“DID YOU SEE THIS GRAPH @USERNAME?”

Let’s see… nope, as usual, you’re wrong.

[power consumption chart]

The iPhone 5 brings new meaning to device level power consumption. With a larger display and much more powerful CPU, it can easily draw 33% more power than the 4S under load, on average. Note the big swings in power consumption during the test. The A6 SoC appears to be more aggressive in transitioning down to idle states than any previous Apple SoC, which makes sense given how much higher its peak power consumption can be. Looking at total energy consumed however, the iPhone 5 clearly has the ability to be more power efficient on battery. The 5 drops down to iPhone 4 levels of idle power consumption in roughly half the time of the iPhone 4S. Given the same workload that doesn’t run indefinitely (or nearly indefinitely), the iPhone 5 will outlast the iPhone 4S on a single charge. Keep the device pegged however and it will die quicker.

See if you can read the words above and process them in your brain. Good luck! You’re gonna need it!

You are so adorable. Bless your little heart.

It still doesn’t say what you think it does.

The iPhone 5 was more power efficient than the iPhone 4S, but that is irrelevant to the statement you made.

It says the 5 was quicker to go from MAX to MIN than the 4S, not that the 5 was more efficient because it completed the tasks faster. It was more efficient because it could ramp from MAX to MIN quickly, which is independent of how fast the workload completes. It’s deceleration they are talking about, not distance covered.

I’d rather be adorable than have conclusively proved, over and over, that I have the reading and reasoning ability of a below-average three-year-old.

In fact, it does. You just don’t want it to contradict whatever crazy-ass narrative is going on inside your head.

So, have a nice life, I guess?

“Nuh uh, I can’t show where it says what I want it to say but it does say it and you’re just not seeing what isn’t there and I can’t show you”

-Wumpus, 2018

I think I’m all wumpus-crazied out for a while. You can have the last word!

The crazy street preacher is always willing to keep proselytizing! Eventually we all have to move on.

It was quicker to go from max to min because it completed the work faster. You can’t go to min when you haven’t finished the work. That’s nonsensical: “Hey, I’m in the middle of this work, but I’m just gonna slow down for no reason.”

Yes, it ramped from max to min when it finished the work. Which it did nearly twice as fast as the previous CPU model:

[chart]

And a dragster only starts decelerating when it reaches the end of the track, that is, it has finished the workload. A faster dragster will begin decelerating sooner because it reached the end of the track first.

You could fairly argue it is a combination of these things: that newer CPUs are invariably faster and use less power because they are on a smaller process node.


But speed and process size (and therefore power efficiency) go hand in hand. It’s literally unavoidable. They simply do not build slower newer CPUs; no company that wants to stay in business would do that.

So, I dunno man. Good luck with whatever the hell it is that’s in your brain here. I am not sure what you are tilting at, but you should take up another hobby.

WHY DOES THE A11 HAVE LOWER POWER CORES IF THEY DON’T WORK? IS EVERYONE AT APPLE WRONG?

They do work. The question is why don’t the ones in Qualcomm’s hardware work, since the battery efficiency is so much worse? But how much worse do I mean? Let me dig up some new data, just for you. I know data scares you, since it doesn’t match your preferred narrative, but bear with me.

Per this article

Galaxy S8, Snapdragon 835 9.50 hours
iPhone 7 9.22 hours

Pretty close! But let’s look at battery sizes in the devices…

Galaxy S8, Snapdragon 835 3000 mAh
iPhone 7 1960 mAh

Therefore, the average battery drain under real-world web browsing use, in mAh per hour (i.e., average current in mA), is…

Galaxy S8, Snapdragon 835 316 mAh / hr
iPhone 7 213 mAh / hr

This means an iPhone 7 with a Galaxy S8-sized battery would achieve over 14 hours of runtime in this same real-world web browsing workload.
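A quick sanity check of that arithmetic, using nothing beyond the runtimes and capacities quoted above:

```python
# Average battery drain = capacity / runtime, using the figures above.
devices = {
    "Galaxy S8 (Snapdragon 835)": (3000, 9.50),  # (mAh, hours)
    "iPhone 7": (1960, 9.22),
}

for name, (capacity_mah, runtime_hr) in devices.items():
    drain_ma = capacity_mah / runtime_hr  # mAh/hr, i.e. average mA drawn
    print(f"{name}: {drain_ma:.0f} mAh / hr")

# Hypothetical: iPhone 7 runtime with a Galaxy S8-sized battery.
iphone_drain_ma = 1960 / 9.22
print(f"iPhone 7 @ 3000 mAh: {3000 / iphone_drain_ma:.1f} hours")
```

That prints roughly 316 and 213 mAh / hr, and a bit over 14 hours for the hypothetical case.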

Looks like the internal implementation details of these CPUs must be different, wouldn’t you say… and in such a way that makes one much less efficient than the other?

Hmm, imagine that.

Galaxy S8 - 5.8-inch Super AMOLED, 1440 x 2960 pixels (570 ppi)

iPhone 7 - 4.7-inch LED-backlit IPS LCD, 1334 x 750 pixels (326 ppi)

I suspect that might be part of the difference.

Hahahahahahahahahaha!

I picture your avatar as how you look every time you read another one of wumpus’ posts.

@wumpus, I think you drew the pink boxes wrong. They need to cover the base areas.

Unless the voltages drop to zero, the efficiency gains should be in the ten-ish percent range or less. The pink highlight boxes amplify the differences and could lead to a misinterpretation of the results.

I added the highlights to indicate that the power savings between generations, from the race-to-idle methodology alone, are not as great as we might think. There are savings, but they’re a lowish percentage.

[re-annotated power consumption graph]

And it clearly shows that each generation improved the ability to dynamically decrease frequency while working on a task, while also reducing the idle power consumed.

The 4 wasn’t able to respond to the varying workload and ran at effectively maximum frequency for the length of the task, while the 4S and particularly the 5 were able to respond dynamically within the task.

The question then is, if the iPhone 5 CPU were clocked so that it took the same amount of time to complete the benchmark as the iPhone 4, would it use less overall power than when it runs at maximum speed and races to idle?

It’s not possible to conclude one way or the other from just these data points, because as the die and process shrink, the current drawn (which, as I remember from electronics, affects power the most) gets vastly reduced. Chip makers also tend to add a lot more circuitry to increase processing power. So it’s hard to run a controlled test across the line of chips, which tend to pack several design innovations into each iteration.
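For what it’s worth, a toy DVFS model shows why either answer is possible: dynamic power scales roughly with C·V²·f and voltage has to rise with frequency, so running slower saves energy per cycle, while any fixed platform or leakage power accrues over the longer runtime. Every constant below is an invented assumption, not a measurement of any real chip:

```python
# Toy race-to-idle vs. run-slower model. Every constant here is an
# invented assumption for illustration, not a measurement of any SoC.

def task_energy_j(freq_ghz, work_gcycles, static_w, cap=1.0):
    """Energy to finish a fixed amount of work at one frequency.

    Dynamic power ~ C * V^2 * f, with voltage assumed to scale with
    frequency; static power burns for the whole active time.
    """
    t_s = work_gcycles / freq_ghz      # active time in seconds
    v = 0.6 + 0.3 * freq_ghz           # assumed voltage/frequency curve
    dynamic_w = cap * v**2 * freq_ghz
    return (dynamic_w + static_w) * t_s

work = 10.0  # gigacycles of work, arbitrary
for static_w in (0.2, 1.5):  # low vs. high fixed platform power
    print(f"static power {static_w} W:")
    for f in (0.8, 1.0, 1.3):
        print(f"  {f:.1f} GHz -> {task_energy_j(f, work, static_w):.1f} J")
```

In this sketch, running slower wins when the fixed power is small, and racing to idle wins when the fixed power dominates, which is exactly why these graphs alone can’t settle the question.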

I do believe what wumpus is contending; it’s just hard to find supporting data, due to the nature of the chips.

[annotated power consumption graph]

This offers a more charitable interpretation of the data posted by @wumpus.

I do think it’s highly dependent on workload. For non-game usage, where the UI is waiting for user input most of the time, idle time should dominate.

Edit: This is what I mean.