In the case of changing perspective and zoom after the photo is taken, I think you’re thinking of something similar but not quite the same (maybe what those Light Field Lytro people announced?). And you’re right, it’s pretty much a post-processing thing that, thanks to processor improvements, you can now do on the phone after taking the photo. We’re into semantics now on what counts as “real time”, but if the photographer can see the effect of this “post-processing” on their view screen and adjust their position or camera settings accordingly, then it’s a big win in the end result. And that win comes from desktop-class processors in phone-sized devices.
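(For the curious, here’s a toy sketch of that kind of depth-based refocus. It is not what Lytro actually does, light-field cameras re-project captured rays, and the image/depth arrays and the `syntheticRefocus` function are made up purely for illustration. But it shows why this can all happen after capture, on stored data, given enough processor.)

```swift
import Foundation

// Toy synthetic-refocus sketch: given an image and a per-pixel depth map
// (flattened row-major arrays, depth normalized to 0...1), re-blur each
// pixel in proportion to its distance from a chosen focal plane.
// This is an illustration of post-capture refocus, not Lytro's algorithm.
func syntheticRefocus(image: [Double], depth: [Double],
                      width: Int, height: Int,
                      focalDepth: Double, maxRadius: Int = 4) -> [Double] {
    var out = [Double](repeating: 0, count: image.count)
    for y in 0..<height {
        for x in 0..<width {
            let i = y * width + x
            // Blur radius grows as the pixel moves away from the focal plane.
            let r = min(maxRadius,
                        Int(abs(depth[i] - focalDepth) * Double(maxRadius)))
            var sum = 0.0
            var count = 0.0
            for dy in -r...r {
                for dx in -r...r {
                    let nx = x + dx, ny = y + dy
                    if nx >= 0, nx < width, ny >= 0, ny < height {
                        sum += image[ny * width + nx]
                        count += 1
                    }
                }
            }
            out[i] = sum / count
        }
    }
    return out
}
```

The point being: once the image plus depth data is saved, you can re-run this with a different `focalDepth` as many times as you like, which is exactly the “change it after the fact” behavior.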
And in the case of Portrait Lighting, it needs to do 3D image mapping at the time the picture is taken, so there’s a “real time” decision to engage those sensors on the camera, or not. I don’t think of it so much as the sensors doing something different; it’s more a question of whether they are used at all, and whether the data they capture, if used, is attached to the photo for later “post-processing”.
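(To make that concrete: on iOS, the “engage the depth sensors or not” decision looks roughly like the sketch below. The AVFoundation properties here, `isDepthDataDeliverySupported`, `isDepthDataDeliveryEnabled`, and `embedsDepthDataInPhoto`, are real public API from iOS 11 on, as far as I know, but the session setup around them is elided, and this is an illustration of capture-time opt-in, not Apple’s internal Portrait Lighting pipeline.)

```swift
import AVFoundation

// Sketch of the capture-time decision to use depth sensors.
// Assumes `output` is an AVCapturePhotoOutput already attached to a
// session configured with a depth-capable camera (TrueDepth or dual lens).
func makePhotoSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    if output.isDepthDataDeliverySupported {
        // The output itself must opt in before individual captures can.
        output.isDepthDataDeliveryEnabled = true
        // Ask for a depth map with this capture...
        settings.isDepthDataDeliveryEnabled = true
        // ...and attach it to the photo file, so relighting or refocus
        // can happen as "post-processing" later.
        settings.embedsDepthDataInPhoto = true
    }
    return settings
}
```

Whether the depth data gets captured at all is decided right there, in real time; what you do with it afterwards is the post-processing part.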
For those who are curious, I actually came to this when Vic Gundotra (of all people) hailed Apple’s computational photography efforts.