The future of facial imaging technology

This video absolutely blows my mind. Can’t wait till we start seeing this level of technology in games. Once tools start reaching this level of sophistication in general, the cost of actively developing games will drop by a huge amount, possibly by orders of magnitude.

edit: Link to the researcher’s site:

That was awesome to watch. I sat there thinking man … the things they could do with that. From just a 2D picture. I agree with the poster on youtube: think of what that could do for missing persons cases, estimating what they might look like now.

Yep, that’s impressive alright. Soon, face models will join voice actors as essential components of AAA games.

Edit: Also, anyone morbidly curious as to what the “female” version of yourself looks like?

Wow, that’s some impressive stuff right there.

That was very impressive.

The original max male looks a bit like Roger Wong.

Hasn’t this sort of technology been a standard feature of hobbyist 3D programs like Poser since the 90s?

Morph target animation is as old as the hills, but there is a fuckton more than that going on in this video. The 2D->3D mapping, where they took Tom Hanks’s face and got actual model data from it (it still looked like him even when they pulled the texture), is mindblowingly awesome, but it may be hard to see why if you’re not familiar with the current state of 3D graphics. I suppose there is always a chance the demo video was cooked to make it look cooler than it really is (if so, they are really clever, because they left in a couple of minor but visible problem areas like the coloring on the right side of Tom Hanks’s nose). But if their system is anywhere near as hands-off as they made it look, and is actually capable of what they show, it is light-years beyond Poser and the like, which require painstaking setup to get good results and still don’t work anywhere near as well as their system appears to when morphing between different emotional states, female<->male, etc.
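For anyone unfamiliar, the morph-target part really is that simple: each “state” is just a full set of vertex positions, and blending is per-vertex linear interpolation. A toy sketch (the three-vertex “mesh” and the numbers here are made up purely for illustration; real meshes have thousands of vertices, but the math is identical):

```python
import numpy as np

# Hypothetical "neutral" and "smile" targets: one (x, y, z) position per vertex.
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
smile   = np.array([[0.0, 0.2, 0.1],
                    [1.0, 0.1, 0.0],
                    [0.0, 1.0, 0.2]])

def morph(a, b, t):
    """Linearly interpolate every vertex between two targets (t in [0, 1])."""
    return (1.0 - t) * a + t * b

# Halfway between the two expressions.
halfway = morph(neutral, smile, 0.5)
```

The impressive part of the demo is everything on top of this: getting the targets and the model itself out of a single photo, rather than hand-sculpting them.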

Funny thing is, the paper behind this was published at SIGGRAPH 99:

So in theory with enough sample states, you could create a whole range of different looking characters for a game (say, a successor to Oblivion)?

The other thing i was thinking is, if they can put down two ‘states’ and then slide between them, could they (for example) take a family of NPCs, and the offspring have their features set based on the two states of their mother and father, so that you can look at the family together and say “hey they are alike”?

Although with the amount of characters in Oblivion, does anyone even look at the faces anymore?

Kind of like virtual Hatfields and McCoys. I like it! Every time you revisit the area, another billy-joe-jim-bob might have spawned, creating a little in-game comedy.

If they were more lifelike and not all bland, yes. If they included attractive people in the mix, yes. Some of the faces in Oblivion were cool, but there wasn’t much variance or many extremes.

That part was pretty cool, I grant you. Specifically, whatever algorithm they use to pull a usable depth map out of a single photo. The rest of it is just camera mapping and subpoly displacement, both of which have been around for a while. And I suspect there’s more setup than what they show in that video. Still neat, though.

I’m sure you could do exactly that. Just toy around with the demo for FaceGen Modeller sometime. It offers this kind of feature - you make a face and save it. (Okay, you can’t save in the demo, but it includes several pre-built faces so that you can experiment with this.) Then you create another face. Now you can have the program calculate the “average” of the two faces’ values. It’ll then create a slider so that you can fine-tune how much the final face favors either your first or second face.
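Under the hood, that slider is presumably just a weighted average of two parameter vectors, which is also how you’d get the parent/offspring idea from upthread. A minimal sketch, assuming each saved face is a vector of shape parameters (the parameter names and values here are invented stand-ins, not FaceGen’s actual internals):

```python
import numpy as np

# Hypothetical shape-parameter vectors for two saved faces,
# e.g. (jaw width, nose length, eye spacing).
face_a = np.array([0.8, 0.2, 0.5])
face_b = np.array([0.1, 0.9, 0.4])

def blend(a, b, slider):
    """slider = 0.0 gives face a, 1.0 gives face b, 0.5 the average."""
    return (1.0 - slider) * a + slider * b

# The "offspring" face, halfway between the two parents.
average = blend(face_a, face_b, 0.5)
```

Nudging the slider toward 0 or 1 would make the child favor one parent, which is exactly the “hey, they look alike” effect.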

This thing looks much more detailed, easy-to-use and accurate than FaceGen. Truly amazing. I’ve tried a few times to recreate my face in FaceGen and it does a merely decent job, and that’s after you take three photos from various angles so that it can better calculate the contours. Whereas this was just… damn! All that from one photo?! I don’t know much about 3D programming at all, but that just strikes me as insanely great.

More vids here:

I wonder, if you took the face of a cat, could you then morph it to create a race of Cat Men that look very realistic?

Cue the furries… This reminds me of the time I went to the dentist and he made my tooth filling with a cool 3D design program right in front of me, then sent it to a 3D printer to be made within 10 minutes!

That dataset of scanned 3D faces from different people, plus different facial expressions for one individual, helps a lot. They basically learned the space of 3D faces and then try to fit where in that space a particular photographed face falls. Very nice stuff.
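The “learned space” idea can be sketched with something PCA-like: take the scan database, find the mean face and the main directions of variation, then project any new face onto that subspace. This is only a rough illustration with random stand-in data (the actual system is far more elaborate, and fits to a 2D photo rather than a 3D scan):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "database": each row is one scanned face, flattened to a vector
# of vertex coordinates. (Real scans have thousands of numbers; 6 keeps
# the sketch small.)
scans = rng.normal(size=(20, 6))

# Learn the face space: mean face plus principal directions of variation.
mean = scans.mean(axis=0)
_, _, vt = np.linalg.svd(scans - mean)
basis = vt[:3]  # keep the 3 strongest modes of variation

def fit(face):
    """Project a new face into the learned space (least-squares coefficients),
    returning the closest face the model can express."""
    coeffs = basis @ (face - mean)
    return mean + basis.T @ coeffs

reconstructed = fit(scans[0])
```

This also explains the cat problem raised below: the optimizer can only return points inside the learned human-face subspace, so a cat photo would just snap to whichever human face is least unlike it.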

I know you were joking, but I doubt this particular thing would work for a cat. At the optimization step, it’s trying to tweak a human face model to fit the appearance in the image. Since there’s no cat face in the space they are optimizing over, the optimization would probably just get stuck on something that doesn’t look much like the cat at all.

It does raise the question of how their appearance model handles beards, moustaches, shades and facial tattoos, though.

I definitely don’t want to downplay the significance of this for gaming, but yes: in fact, the computerized rendering and understanding of what gives faces their apparent age, gender, race, and other attributes has been studied by psychologists for a very long time. The famous automated caricature programs (which can create cartoons of faces from drawings of them without any human input) have been around since the 1970s.

Google automated caricatures and look for the famous automated images of Ronald Reagan.

The only genuinely new technology here is the ability to scan a 2D image and turn it into a 3D model. This has an obvious impact, and I suspect it is what John Carmack was looking into when he said he was doing “research with webcams” in between projects.