Yeah, I’m well aware of all the awesome image-enhancement stuff that software lets you do, and obviously better processors let you do it faster.
What I’m talking about when I differentiate “real time” is a situation where you actually have a dynamic light sensor: the software looks at the image, adjusts the camera to gather more data, and then snaps more pictures. This is a thing that exists, but as far as I know it doesn’t happen with cameras on phones, since they are totally static.
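To sketch the kind of loop I mean: the software analyzes what the sensor just captured, adjusts an exposure setting, and shoots again. Everything here (`FakeCamera`, `mean_brightness`, the exposure scaling) is made up for illustration; a real camera stack would expose this very differently, if at all.

```python
# A minimal sketch of a dynamic-capture feedback loop, assuming a
# hypothetical camera object with a capture() method and an exposure knob.

def mean_brightness(frame):
    """Average pixel value of a frame, normalized to 0..1."""
    return sum(frame) / (255 * len(frame))

class FakeCamera:
    """Stand-in sensor whose output scales with the exposure setting."""
    def __init__(self, scene_level=40, exposure=1.0):
        self.scene_level = scene_level  # how bright the scene "really" is
        self.exposure = exposure

    def capture(self):
        # Simulated frame: 16 pixels, clipped to the 0..255 sensor range
        value = min(255, int(self.scene_level * self.exposure))
        return [value] * 16

def capture_with_feedback(camera, max_frames=8, target=0.5, tolerance=0.1):
    """Snap frames, nudging exposure toward a target mean brightness."""
    frames = []
    for _ in range(max_frames):
        frame = camera.capture()
        frames.append(frame)
        brightness = mean_brightness(frame)
        if abs(brightness - target) <= tolerance:
            break                      # close enough to the target: stop
        if brightness < target:
            camera.exposure *= 1.5     # too dark: gather more light
        else:
            camera.exposure /= 1.5     # too bright: gather less
    return frames

frames = capture_with_feedback(FakeCamera())
final = mean_brightness(frames[-1])
```

The point of the sketch is that each capture *changes what data the next capture gathers* — that’s the loop a static phone sensor can’t run.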
For the cameras, the reason I said you just snap the picture and then processing happens is that, technically, it doesn’t matter whether that processing takes 1 millisecond or 1 hour… the end result will be the same.
But if the photographer can see the effect of this “post-processing” on their view screen and adjusts something like their position or camera settings accordingly, then overall it’s a big win for the end result. And that win comes from desktop-class processors in phone-sized devices.
This makes sense, in that it improves the usability of the device by letting the user see the final result.
I guess that’s a possible thing, in that the dynamic hardware loop ends up being a choice of which sensors to use… although in that case, I’d suspect you could trivially just activate all of them, all the time, without much negative impact, and then only use the data if it turns out to be necessary later.
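That “activate everything, decide later” alternative is easy to sketch: grab a frame from every sensor on each shot, then pick the useful data afterwards. The sensor names and the quality score below are entirely hypothetical.

```python
# Sketch of the "capture from all sensors, select afterwards" approach,
# assuming hypothetical sensors that each return a frame with a score.

def capture_all(sensors):
    """Grab a frame from every sensor at once."""
    return {name: sensor() for name, sensor in sensors.items()}

def pick_best(frames, score):
    """Afterwards, keep whichever sensor's frame scores highest."""
    return max(frames, key=lambda name: score(frames[name]))

# Made-up sensors: each just returns a frame carrying a quality value.
sensors = {
    "wide":  lambda: {"quality": 0.7},
    "tele":  lambda: {"quality": 0.9},
    "depth": lambda: {"quality": 0.4},
}

frames = capture_all(sensors)
best = pick_best(frames, score=lambda f: f["quality"])
```

No feedback loop here at all — the selection step runs entirely after capture, which is why the processing delay stops mattering.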