Bleep Qualcomm right in their Qualcomm-hole


I think some of you need to get outside and enjoy the fresh air for a few days. Maybe do some hiking. Get away from technology for a bit. It really does the mind and body good.


A few people buy sports cars but they all admit that it makes sense to still make and sell Hondas.


I love (for certain values of ‘love’) how you’re acting like this was your point all along, and that you didn’t kick off this whole shitshow by asking

And regarding the displays, I’m not sure what data you think is conclusive there. I readily admit that Apple makes very good LCD displays in terms of color accuracy, brightness, etc. But I find the resolution to be unacceptably low for a device I’m spending close to a grand on in 2017, regardless of great lab measurements in other categories.


More than CPU speed? I like high DPI, but I'm not sacrificing the crazy speed difference between the two platforms for what is a nice, if not great, DPI on iOS.

If Qualcomm was destroying Apple in CPU speed I certainly wouldn’t be on iOS anymore.


2012 Apple fans: Screen resolution is literally the most important thing ever.
2017 Apple fans: Screen resolution doesn’t matter at all.


Yes. I consider the display the most critical component of a phone because it's in use 100% of the time the phone is, and the vast majority of that time it is displaying some sort of text or media – workloads where there's no productive use for spare processing power. When I want to compile code or run data analytics, I use a more appropriate tool than my phone. A deficiency in the display is far more noticeable than a webpage taking 0.8 seconds to load instead of 0.5.

If Apple-equivalent SoCs were available as a customization option without a ton of inextricable baggage, I’d be happy to pay an extra $50 or so for one. But they’re not.



Circle back to TVs.

HDTV was a huge deal. 1080p and even 720p looked dramatically better than 480i. It had a massive immediate impact. Now we have 4k. 4k is not a huge deal. It’s difficult to tell the difference between 1080p and 4k at most common screen sizes and couch distances.
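The "can't tell 1080p from 4k from the couch" claim can be sanity-checked with a quick pixels-per-degree calculation. A rough sketch (the 55" screen and 8-foot distance are assumed typical values, not measurements from the post):

```python
import math

def pixels_per_degree(h_pixels, diagonal_in, distance_in, aspect=16 / 9):
    """Horizontal pixels per degree of visual angle for a flat screen."""
    # Screen width from the diagonal and aspect ratio
    width = diagonal_in * aspect / math.hypot(aspect, 1)
    # Total horizontal visual angle subtended at the viewing distance
    angle = 2 * math.degrees(math.atan(width / 2 / distance_in))
    return h_pixels / angle

# Assumed setup: 55" TV viewed from 8 feet (96 inches)
print(pixels_per_degree(1920, 55, 96))  # ~68 ppd for 1080p
print(pixels_per_degree(3840, 55, 96))  # ~137 ppd for 4k
# 20/20 vision resolves roughly 60 ppd, so 1080p is already near the limit.
```

At those numbers, 1080p already sits at about the acuity limit of normal vision, which is why the extra 4k pixels are hard to see from the couch.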

In this analogy, HDTV is retina screens, and 4k is phones with much higher than retina density. Retina was a huge deal, but beyond that you just can’t tell.

The same sort of thing applies to SoC speeds. It doesn't actually matter that the A11 is much faster than the Snapdragon 835, because it doesn't noticeably impact the user experience.

Now you could argue that greater performance enables new types of experiences-- and that could very well be true. Maybe AR turns out to be transformative, and this offers Apple a huge competitive advantage, and suddenly SoC single-threaded speed really matters to consumers. Maybe. But that hasn’t happened yet.


Well, to compare the iPhone 8 to the Galaxy s8, I’m pretty sure I can tell the difference between a 4.7" screen and a 5.8" one.


Yes size matters (that’s what she said) but once past a certain threshold, pixel density doesn’t.


A sports car where you have to go find some secret, out-of-the-way road that almost no one ever drives on, at a full moon, and only in that nigh-vanishing set of circumstances will you notice the slightest difference in performance.

Everyday real-world use between the phones is almost impossible to discern. I have an iPhone 7+ and a Pixel. Loading up CNN, loading up ESPN, loading up Qt3, loading up WaPo. If I launch any of those sites simultaneously, side by side on the devices, it's shockingly …similar. The Pixel even wins about as often as not, which probably means the vagaries of LTE or WiFi connectivity have a greater impact. THE POWER certainly has no effect sending texts, recording video, snapping photos, or running almost any mainstream app I've ever used.

All that said, I'm all for going after Qualcomm. The point that you could do more with these phones if they weren't programmed to accommodate the lagging Qualcomm chips is well taken.


One thing that I find really weird is how Qualcomm is now running TV ads.

I mean… what is the point of those?


I think you’re generally right, except for the “snapping photos” part. Computational photography is what will allow smartphones to improve their picture quality, since the form factor doesn’t really allow for it to be done through significant lens size changes. The more processor power available to the phone, the more it can do in the fractions of a second it’s taking a picture to improve that picture significantly. Apple is already going ahead and making their HDR the default this generation, thanks to those improvements.
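To give a concrete flavor of what "computational" means here, below is a toy exposure-fusion sketch of the kind of HDR merge being described. This is just the textbook idea, not Apple's or Google's actual pipeline, and all names in it are made up:

```python
import numpy as np

def fuse_exposures(frames, exposures):
    """Naive HDR merge: weighted average of bracketed frames in linear space.

    frames: list of float arrays with values in [0, 1], one per exposure
    exposures: relative exposure times for each frame (e.g. 0.5, 1, 2)
    A toy sketch of exposure fusion, not any phone vendor's real pipeline.
    """
    radiance = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(radiance)
    for frame, t in zip(frames, exposures):
        # Hat weighting: mid-tone pixels count most, clipped pixels barely at all
        w = 1.0 - np.abs(frame - 0.5) * 2.0
        # Divide out the exposure time to estimate scene radiance
        radiance += w * (frame / t)
        weight_sum += w
    return radiance / np.maximum(weight_sum, 1e-8)
```

Doing this well across several full-resolution frames, inside the shutter-press window, is exactly the kind of workload where a faster SoC shows up as a better photo rather than a faster app launch.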


Does that happen in real time or after the snap? If after, it seems a negligible impact. I've had the Pixel defaulting to HDR for a year. I'd occasionally see the processing-HDR notification up in the status bar, but it never impacted snapping the pics, and it completed so quickly it never seemed an issue.

I'll have to see how the iPhone 8 and X compare to the Pixel 2, but the Pixel certainly had a better camera than any other phone prior to these latest releases.


From what I understand, HDR and the new “Portrait Lighting Mode” are being done in real-time, thanks to the processor improvements.

[EDIT] Yep, confirmed it from this article

This is a good summary article of what you can do when you throw immense processing power at photography in real time.


“real time” doesn’t really mean anything though in that regard, right?
I mean, it’s not doing anything to adjust the optical sensor, because those sensors can’t change.

Light is hitting the sensor, you push the button… and stuff happens. For photo processing, in many cases, it doesn’t really matter how long that processing takes, right?


Looks as if the iPhone 8 Plus climbed over the Pixel in DxOMark, 94 to 90. The iPhone 7 was 88.

The Pixel 2 (XL or standard sized) is now rated at 98.

Seems that’s the quality benchmark cited most often.

FWIW, I don’t think the horse race jockeying is much to get worked up about. Both platforms offer amazing image quality these days. It was only about 2 years ago that any Android camera (the Galaxy S6) could even hang w/ the iPhone.


It's not just about largely subjective picture "quality", and I'm a big supporter of John Gruber's ongoing slagging of DxOMark (which, to his credit, he slags whether or not they have an iPhone at #1). Computational photography allows you to do things that go beyond the lens and the optical sensors.

So, no, Timex. It's the stuff that happens when you push the button that matters. For example, some researchers have shown that if you take the right type and number of photos, as decided at the moment of capture based on many ambient conditions, you can get photos where you can change the perspective and focal length AFTER taking the photo. This is only possible thanks to processor improvements. Or, right now, there's Apple's "Portrait Lighting Mode", which does some magic with the 3D camera mapping of the iPhone 7+, 8, 8+ and X to allow you to change the lighting conditions in post in a way you could not if that data were not analyzed and captured when the photo was taken.

This isn’t an Apple vs. Qualcomm thing per se. It’s just stating the fact that improved processors are going to be important in far more than just launching apps. Especially when it comes to phone photography.


I understand that, but it’s all still working on a fixed set of data coming in from the sensor, isn’t it?
As far as I’m aware, all of the post processing taking place with phones’ cameras is just that, post processing.

I guess what I’m thinking here is that since the sensors are totally static (This is the case, right? Maybe this is the part I’m missing.), there’s really nothing you can do “in real time” to affect the image. That is, there’s no processing you can do that will cause the sensor to adjust something and get a different picture. No matter what, the same light is hitting the sensor.

Well sure… Samsung phones were doing that years ago, if I recall. I have an old tablet with multiple cameras that can do this too. But it’s still all done through post processing, rather than anything being done when the image is getting snapped.


In the case of changing perspective and zoom after the photo, I think you're thinking of something similar but not quite the same (maybe what those light-field Lytro people announced?). And you are right, it's pretty much a post-processing thing that, thanks to processor improvements on the phone, you can do on the phone after you take the photos. I think we're into semantics now on what is meant by "real time", but if the photographer can see the effect of this "post-processing" on their view screen and therefore adjusts something like their location or camera settings, then overall, big win in the end result. And that win comes from desktop-class processors in phone-sized devices.

And in the case of Portrait Lighting, it needs to do 3D image mapping at the time the picture is taken, so that's a "real time" decision to engage those sensors on the camera, or not. I don't think of it so much as the sensors doing something different, more a question of whether or not they are used at all, and whether or not the data they capture, if used, is attached to the photo for later "post-processing".

For those who are curious, I actually came to this when Vic Gundotra (of all people) hailed Apple’s computational photography efforts.