Nvidia DLSS, this thread has been visually enhanced.

Time for its own thread, as news has been posted and spread across a few different forum threads over the past three years.

Some history:

The newest version of DLSS is 2.3; there's also some news on Nvidia Image Scaling for people without RTX cards.

I wasn’t actually aware of the Image Scaling option before. While I assume it is crap compared to DLSS, if desperate for performance… time to go give it a try.

Yeah, they have a screenshot or two showing it's the next best option if you don't have a card that supports DLSS.

Wait wait wait… this is some CSI “zoom in” voodoo magic. The text isn’t legible in native 4k, but is perfectly legible in DLSS. That’s crazy.

I could be wrong, but I think the Native 4k is probably illegible because of AA, which is always single frame based (remember, that’s heavily zoomed in).

Theoretically, since DLSS has seen how it looked over multiple frames (probably with slight movement), it knows what it should look like without AA messing it up.

Stop. Enhance 15 to 23. Give me hard copy right there.

You’re right that DLSS works from multiple frames, so it has seen a couple of different versions of that text, which gives the AI good data to reconstruct from.

But some AA does use multiple frames, the so-called TAA (Temporal Anti-aliasing). I think TAA is still a base-level requirement for DLSS to work, so it’s probable the pictured game is using it.
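To make the multi-frame "voodoo" a bit more concrete, here's a toy 1-D sketch (my own illustration, nothing like Nvidia's actual network): jitter the sample positions by half a pixel between two half-resolution frames, and the combined samples contain detail neither frame has on its own.

```python
# Toy temporal super-sampling sketch: two jittered half-res "frames"
# interleave into a full-res result that matches a native render.
import numpy as np

def scene(x):
    # 20-cycle sine: more detail than a single 32-sample frame can resolve
    return np.sin(40 * np.pi * x)

full_res, half_res = 64, 32
reference = scene((np.arange(full_res) + 0.5) / full_res)   # native 64-px render

# frame A: 0.0 px jitter -> lands on the even full-res pixel centres
# frame B: 0.5 px jitter -> lands on the odd  full-res pixel centres
frame_a = scene((np.arange(half_res) + 0.25) / half_res)
frame_b = scene((np.arange(half_res) + 0.75) / half_res)

accumulated = np.empty(full_res)
accumulated[0::2], accumulated[1::2] = frame_a, frame_b

print("max error vs native render:", np.abs(accumulated - reference).max())  # ~0
```

Real DLSS obviously has to deal with moving objects, disocclusion and so on, but that's the basic reason it can show text the single-frame image can't.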

Shhhh… voodoo magic.

It is. TAA is a much simpler version of the same thing; it tends to look a bit blurry, which DLSS fixes.

Unreal Engine 5 has “Temporal Super Resolution”, which is essentially DLSS not restricted to Nvidia hardware, but only for Unreal games.

And finally, Intel has “Xe Super Sampling”. XeSS is essentially DLSS that’ll work on any hardware, on any engine. This is the alpha and the omega, what everybody wants/needs.

There’s also AMD’s FidelityFX Super Resolution (FSR), which doesn’t use motion vectors or multiple frames and thus sucks, and Nvidia’s “Nvidia Image Scaling”, which is basically FSR and thus also sucks.
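To illustrate why single-frame scaling is limited, here's the same 1-D toy as before but with a purely spatial upscaler (nothing like FSR's or NIS's actual filters): with only one aliased frame to work from, no filter can recover detail that was never sampled.

```python
# Purely spatial upscaling of a single aliased frame (toy contrast with
# the temporal sketch above; not FSR's or NIS's real filter).
import numpy as np

def scene(x):
    return np.sin(40 * np.pi * x)   # same 20-cycle test signal

full_res, half_res = 64, 32
reference = scene((np.arange(full_res) + 0.5) / full_res)
one_frame = scene((np.arange(half_res) + 0.5) / half_res)   # single aliased frame

# linear interpolation up to full resolution: the missing detail stays missing
upscaled = np.interp((np.arange(full_res) + 0.5) / full_res,
                     (np.arange(half_res) + 0.5) / half_res,
                     one_frame)
print("mean error vs native render:", np.abs(upscaled - reference).mean())  # large
```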

Yeah, TAA uses the data from multiple frames to stabilise and smooth the image, rather than upscaling and adding new pixels like DLSS does.
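At its core the TAA resolve is just an exponential blend of the current frame with the reprojected previous frame. A hand-wavy sketch (my own toy, not any engine's real implementation):

```python
# Hand-wavy TAA sketch: reproject last frame's result using the motion
# vectors, then blend it with the new frame. The averaging kills shimmer
# and jaggies, but it's also exactly where the characteristic blur comes from.
import numpy as np

def taa_resolve(current, history, motion, alpha=0.1):
    """One resolve step on (H, W, 3) float images; motion is (H, W, 2) in pixels."""
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    prev_y = np.clip(ys - motion[..., 1], 0, h - 1).astype(int)   # where was this
    prev_x = np.clip(xs - motion[..., 0], 0, w - 1).astype(int)   # pixel last frame?
    reprojected = history[prev_y, prev_x]
    return alpha * current + (1.0 - alpha) * reprojected          # exponential blend

# toy usage: a static 8x8 scene with per-frame noise ("shimmer")
rng = np.random.default_rng(0)
clean = rng.random((8, 8, 3))
history = clean.copy()
no_motion = np.zeros((8, 8, 2))
for _ in range(30):
    noisy = clean + 0.2 * rng.standard_normal((8, 8, 3))
    history = taa_resolve(noisy, history, no_motion)
print("shimmer left after 30 frames:", np.abs(history - clean).std())   # << 0.2
```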

Of course there is also DLAA, which I’m not sure many games support; it’s like a middle ground that fixes the TAA blur without upscaling.

DLAA is simply DLSS without the upscaling. I didn’t bother to mention it because it’s only in one game, Elder Scrolls Online, which can afford it because that game is pretty old and already runs great on any Nvidia hardware with tensor cores.

XeSS is where it’s at, that’s what we all should want to succeed. Cross-platform, full-featured. Of course it isn’t out yet so we don’t know if it’s real.

Yeah, the Intel one: same kind of tech, but they’re supporting other hardware vendors.

Nvidia has such a lead in refining their algorithms, though; DLSS just keeps getting better at handling edge cases. The latest round of improvements apparently fixes particle ghosting. And DLSS 3.0 will probably be out by the time we get XeSS 1.0! ;)


And let’s not forget that DLSS is optimised to run on tensor cores, and trying to do that kind of math on a general-purpose GPU is going to kill frame rates.

Intel will probably add similar tech to their cards at some point, but they also need to do all the API work to connect XeSS up to tensor cores as part of a rasterisation pipeline. Or XeSS just ends up being the upscaler for Intel cards.

Per Intel, it will have a greater performance impact on GPUs without Intel’s Xe AI acceleration (yet another helpful acronym: XMX), but it will run on every GPU and still be much faster than rendering at native resolution.

I wish nVidia would spend their effort on something important, like training an AI on 80s and 90s anime so we can run an algorithm that makes modern anime look good again.

“Enhance!” is already here, man:

It’s called “deep” learning because it uses lots of layers, and each layer costs compute; there’s no way around that. You need to be able to run your model in ~6 ms, absolute max (that’s assuming 60 fps and that you spend 33% of your frame budget just on super-sampling).
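Here's the arithmetic behind that ~6 ms number, in case it helps:

```python
# Where the "~6 ms" ceiling comes from (the 33% share is an assumption)
target_fps = 60
upscale_share = 0.33                        # fraction of the frame spent on super-sampling
frame_time_ms = 1000 / target_fps           # ~16.7 ms per frame at 60 fps
budget_ms = frame_time_ms * upscale_share   # ~5.5 ms for the whole network
print(f"{frame_time_ms:.1f} ms/frame -> ~{budget_ms:.1f} ms budget for super-sampling")
```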

Bear in mind that the tensor cores on Ampere might not be a large proportion of the die, but for 4×4 matrix multiply-adds they offer something like 4 times the computational capacity of the CUDA cores on the chip.
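That 4×4 multiply-add primitive is easy to picture (numpy stand-in, obviously nothing like real GPU code):

```python
# The primitive in question: a fused 4x4 matrix multiply-accumulate,
# D = A @ B + C. A tensor core evaluates this as one operation, where
# CUDA cores would grind through the 64 multiplies and 64 adds one
# FMA at a time.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float32)   # inputs are typically fp16 on the GPU
B = rng.standard_normal((4, 4)).astype(np.float32)
C = rng.standard_normal((4, 4)).astype(np.float32)   # fp16/fp32 accumulator

D = A @ B + C   # one tensor-core-shaped multiply-accumulate
print(D)
```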

It seems likely that if you ran DLSS on general-purpose CUDA cores, it would be slower than just rasterizing at the higher resolution!

Now sure, you can make a model that uses fewer layers and run it on your general-purpose GPU FP units. And I’m guessing that’s what Intel are planning for their “cross-platform XeSS”. I just don’t believe it will be any good.

Well, we just don’t know yet. Intel claims otherwise.

Are any of those actually newly added (apart from the ones that aren’t out yet), or is it a summary of what already exists? I see PSO2NG is new in the last few days, but otherwise it seems like stuff that was already implemented.