Funny little nerd story. I was walking around SIGGRAPH this year (it's a convention for CGI nerds) and a couple people told me NVIDIA was using deep learning to upres images.
Now I'm right in the middle of designing a neural net myself, so I'm somewhat familiar with the tech, but by no means an expert. So my initial reaction was "no they aren't, you must have misunderstood." But of course I'm highly intrigued. What? Is this possible? Are they using some insane data set to extrapolate objects, lighting, background, etc.?
So I practically run over to their booth and find the display, manned by a nice, technical Indian data science spokeswoman. I ask her, amused but braced for something stunning, since the display behind her WAS upresing a scene.
First thing out of her mouth was "That's impossible." I'm like "Ok, whew!" Turns out they were dramatically improving render speeds in the viewport so a 3-D artist could work in render mode more easily. A cool thing, but no Blade Runner.
Maybe next year!