Yeah, it’s very cool. I’ve been playing around with it more and it’s totally transforming old photos.
But, lol, on my Intel 11th gen with Iris Xe graphics, it takes about 5 minutes to denoise a photo. On my 5900X with a 6800XT, it takes 10 seconds. Oh, Intel. How do you continue to suck so badly?
The only real negative is that it creates a separate, denoised DNG, which is more than annoying. Adobe has mentioned they’re working on it, hopefully just making it a layer of the existing photo instead.
That’s very nice. I wasn’t convinced watching the video, but seeing your real-life example, that’s a pretty huge difference and the finished photo looks great.
GPUs are wildly fast at certain kinds of math (floating-point matrix operations) compared to CPUs. It’s why you have a graphics card for video games in the first place. A lot of the GPU-compute APIs (CUDA, most notably) only work with Nvidia GPUs, so this isn’t really an “Intel sucks” thing so much as an integrated-vs-discrete GPU thing. In the APIs I use, an Nvidia GPU can be up to about 50x faster than running on the CPU.
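To make the speed gap concrete, here’s a minimal sketch (Python with NumPy; my own illustration, nothing to do with Adobe’s actual implementation) of the kind of floating-point matrix multiply that denoising models run thousands of times per image:

```python
import time
import numpy as np

# Denoising models boil down to huge batches of floating-point
# matrix multiplies -- exactly the workload GPUs are built for.
n = 1024
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b  # one matmul; a full denoise pass runs many thousands of these
cpu_seconds = time.perf_counter() - start

print(f"{n}x{n} matmul on CPU: {cpu_seconds:.4f}s")
# On a discrete GPU (e.g. in PyTorch, moving both tensors to 'cuda'
# before the multiply), the same operation typically runs tens of
# times faster -- which is roughly the 5-minutes-vs-10-seconds gap
# described above.
```

The exact speedup depends on the hardware and the framework, but the shape of the workload is why an integrated GPU like Iris Xe falls so far behind a discrete card like a 6800XT.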
When I view those images on my 12.9” iPad inline with the other posts scrolling along, they look pretty close to identical — I don’t really notice the top image’s graininess. But when I pinch-zoom in, there’s a big difference between the two.
Is there any downside to the technique other than the extra time to run the process? Does it add any distortion or anything like that to the image when it’s overdone? It certainly seems to be all win from what I’ve seen on your example images @Woolen_Horde.
The only downsides I’ve encountered thus far:

1. It creates a second DNG. I’m on Creative Cloud, so this potentially eats up even more of my 1TB of space. I hope Adobe fixes it so you don’t need to create a copy.
2. There can be issues with human faces in the background. You can get funky results if you look closely.
But for the most part, it’s amazing. Totally transforms a lot of my old photos.