Focus-less camera

So remember how you read those tech articles about “light field” cameras that would just capture all the rays and then be able to focus them in post-processing, the way you can do white-balance in post today?

Well, it’s a product.

They make it sound revolutionary, but reading between the lines, my guess is that it totally sucks. They deliberately don’t talk about the resolution (though the number in the article suggests less than 2 megapixels) or the low-light performance, which means that both of them are terrible. They have no samples, which means the images are hideous. And (worst of all) you can’t use any of your software, and instead have to upload all your images to their server.

I think it’s safe to chalk this one up alongside the Foveon sensor – another technology that was promised to revolutionize digital photography, but instead isn’t as good as the conventional alternatives.

No, wait, I’m lying about the no sample thing. They do have samples. They’re all interactive, too. It’s kind of neat, really. But the images are pretty weak, worse than cellphone quality.

Oh, that’s kind of neat. Those are better than my cell phone =) But definitely worse than my wife’s camera. It is kind of cool to be able to change the focus, though.

I have a feeling this is more like one of those things that will revolutionize digital photography in like 5+ years. (and if I had to guess, more like 10-15 years, but you never know with tech these days)

If you’ve got a good cellphone camera, shrink the pictures down to equivalent dimensions, and it should be better. Plus if you look at this one and focus and zoom in on the background, you’ll see a weird grid pattern thing.

It is neat, though.

It sounds like they use a lot of the sensor’s resolution just to capture focus information. That might be why the resolution is so low so far: a 16MP sensor only gives you around 2MP of ‘effective’ resolution.
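Just to make the arithmetic concrete, here’s a back-of-the-envelope sketch in Python. The n×n rays-per-microlens figures are purely guesses on my part, not anything the company has published:

```python
# Back-of-the-envelope: how angular sampling eats spatial resolution.
# Assumes each microlens covers an n x n block of sensor pixels; the
# actual geometry of this camera is a guess here, not a published spec.

def effective_megapixels(sensor_mp: float, angular_grid: int) -> float:
    """Spatial (output) resolution left over after angular sampling."""
    return sensor_mp / (angular_grid * angular_grid)

if __name__ == "__main__":
    for n in (3, 4, 10):
        print(f"16 MP sensor, {n}x{n} rays per microlens -> "
              f"~{effective_megapixels(16, n):.1f} MP image")
```

With a 3x3 angular grid you land right around the ~2MP figure being guessed at above; anything denser eats into the output resolution fast.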

I doubt it will be long until someone reverse engineers the format, and the resolution problem will be fixed in time. It will be cool to see this product in a few years’ time, but the res is a bit poor so far.

This article has a good overview of the technology and the issues around it.

I read last week about Adobe demonstrating a system that recovered from any focus errors (blur) and produced the original image. It was reported to be absolutely amazing. Sounds related… except for the sucking part.

Completely unrelated. The Adobe thing works with conventional images, and corrects motion blur. This thing works with a special lens/sensor arrangement to produce an unconventional image, and then uses that to calculate focus.

The Instagram hipster, Polaroid-loving crowd is gonna eat this up.

The whole notion of uploading the pics to their server and displaying them interactively is a little wonky. I’d prefer to just shoot them, choose a focus point and save it as a standard image for sharing.

As for the concept, I think it’s pretty neat. The actual product leaves much to be desired in its current state. For this to be useful, I’d want it in my cellphone. Low-light performance would also be key. They are claiming an F/2 lens, which should be good in low light, but with all the processing going on, it might not act like a normal F/2 lens would. My (completely uneducated) guess is that low light turns everything into a big gray blob that doesn’t contain enough info to properly reconstruct a photo.

In their principles of operation, yes – completely unrelated. But it’s striking how similar the comments are for both of them, evoking the Blade Runner image-manipulation scenes and finding things that weren’t immediately apparent through re-focusing (ignoring the movie’s maneuvering-around-objects bit).

I don’t think it will ever fit into a cellphone. As per mkozlow’s TechCrunch link, the image stored is like a 32x32 grid of the same image at slightly different focus levels. You lose a massive amount of effective lens quality and megapixels because of it, which will probably make doing it with the tiny lenses and sensors in cellphones impossible for a few more years.

Yes, that’s a safe assumption. I would expect the electronic components will eventually scale down enough to fit into a phone, but the lens elements will take longer. At some level, the laws of physics will come into play, and those will likely be much harder to overcome.

If you’re berating me for being inconsistent: I agree with what you said.

I discussed this with a professional photographer a few months ago when they first started hyping it, and his take was that the image quality was going to be constrained by the total data collected for each image. Apparently, the ability to do this has been around for some time, but it’s always been based on much larger devices due to storage capacity concerns. There’s some new system logic built into this device to work more optimally with less data in exchange for a slight degradation of quality, so it should work fine for the needs of most people. Unfortunately, all of the above went way over my head, so I’m forwarding it on to you guys to do with what you will.

I think this photo taken after the STS-135 landing is a good indication of real-world limitations. Unlike the carefully set up shots in most of the photostream, in this real-world event you see that you can’t magically bring any part of a photo into focus. Clicking around the shuttle name, the orange platform in the background, etc. doesn’t bring those areas into focus. And the detail on the orbiter tiles isn’t good even on the parts that actually are in focus.

You can focus anywhere; it’s just a super low-res photo. Which is the fundamental problem with it.

I don’t understand much about photography, but is it inaccurately reductive to say that this just takes a lot of simultaneous photos at different focus levels, then collates them all into a single file so you can apparently shift the focus of the resulting image? Because it’s a lot less impressive when described that way.

That is inaccurate, yes. What it’s really doing is taking a lot of images of the scene from different angles, and using the variation in them to calculate the path that the light beams are taking, and then setting focus based on aligning the rays for a particular depth.

So you’re not just selecting one of the pre-captured images when you focus, you’re doing a computation on them that delivers a composite image focused at that place.
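For anyone curious what that computation looks like, here’s a minimal shift-and-add sketch – the textbook refocusing approach, assuming you’ve already extracted the sub-aperture views into a 4D array. It is not Lytro’s actual pipeline or file format:

```python
import numpy as np

def refocus(light_field: np.ndarray, shift_per_view: float) -> np.ndarray:
    """Shift-and-add refocus over a 4D light field.

    light_field: array of shape (U, V, H, W) holding sub-aperture views,
    i.e. the same scene seen from a U x V grid of slightly different
    positions on the lens. shift_per_view picks the virtual focus depth:
    each view is shifted in proportion to its offset from the center of
    the aperture, then all views are averaged.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round((u - cu) * shift_per_view))
            dx = int(round((v - cv) * shift_per_view))
            # Integer shifts via np.roll keep the sketch simple; a real
            # implementation would interpolate sub-pixel shifts.
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

Objects whose rays already line up across the views stay sharp, everything else averages out into blur; sweeping shift_per_view moves the plane of focus nearer or farther, which is exactly the “click to refocus” trick in their viewer.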

The guess I’ve seen online is 9 distinct focus zones. I was originally curious about this, but the sample photos aren’t that interesting, and the news that they’re locking down the export workflow isn’t helpful.
The original white paper had an image with a ‘full’ depth of field, but I haven’t seen one among the new samples. That would be really handy for group shots.