Image quality matters, and FSR1 looks like shit in most games and scenes. Doubling your 4K framerate is easy: run the game at 1440p and scale it up. Making that look good is the more challenging problem.

If FSR1 is truly worthy, we’ll see new AAA games using it in a year. Want to make a bet on that one?

Now that’s an argument against it that actually makes sense, although I’d bet against DLSS on its fastest modes too.

Right, DLSS in performance mode doesn’t look great either-- but it’s far better than FSR1.

Anyway, no need to debate a completely dead technology.

We were very skeptical about FSR 1.0 when it was released, and perhaps still are. FSR 2.0, however, simply looks great while offering substantial performance increases. As a technology, we don’t need to devote pages to explaining how AMD’s latest feature operates: if you need quicker framerates and enable it, your framerates will increase significantly. It remains a temporal upscaler, a class of technique that has existed for years and has developed a reputation in games for image-quality sacrifices. The catch is that some people will dislike the artifacts, while others won’t even notice them. We believe the quality-mode settings are more than sufficient.

Interesting article. At the end, though, they claim FSR2 will eventually be incorporated into the driver like Radeon Super Resolution, which is simply incorrect. Expected better from guru3d.

Only looking at stills misses a lot of the potential shortcomings, since the multi-frame approach of the temporal algorithms FSR 2.0 and DLSS use is most prone to weakness in motion. It’s not completely irrelevant in games where you might stand around a lot, but I can definitely understand why Assetto Corsa Competizione would be an outlier. That’s a game with fairly simple modeling but a LOT of high-speed motion that can change anticipated speed and direction quite rapidly with variable acceleration. It’s little wonder the fastest DLSS settings produced so many ghosting artifacts.

6800XT as fast as advertised, and runs quieter and cooler than my Vega56 while running circles around it. Those leafblower coolers sucked.

My old 1060 was a leaf-blower-style card. I liked it for my cooling setup: the CPU was water cooled with the radiator pushing air out, and with pull fans in the front of the case it was drawing in cool air while both the CPU and GPU exhausted hot air out of the case.

But yeah, for most setups they are not ideal.

The nice thing about FSR 2.0 is that it even works on older Nvidia boards.

Although the improvements get more modest the older the board is. Still, it’s better than what DLSS can deliver to them (which is zero).

Yes, I found that pretty interesting. Older architectures are lacking something to make AMD’s reconstruction perform well. Probably compute.

Steam Deck is RDNA2 though. Looking forward to seeing that comparison. Many people use FSR1 in performance mode on the Steam Deck since the screen is so small anyway-- FSR2 should help image quality rather a lot.

DF weighs in

So FSR2.0 is more expensive on AMD cards. Jeez, can’t they do anything right?

It’s still good tech; you’re using ~4ms of your allotted 16.67ms on the reconstruction but the game overall runs much faster because it’s running at a lower base resolution.

Do these algorithms increase latency as the cost for smoother framerates?

“Added milliseconds” in a vacuum is about the most useless statistic I can think of.

This is more about the chip shortage in general, but doesn’t Nvidia use Samsung quite a bit? TSMC is also planning further price hikes.

It’s not in a vacuum: at 60fps each frame has a 16.67ms budget, so a 4ms cost is significant.

Input lag is typically tied to framerate, and these upscalers all improve that, so I would expect latency to decrease. It does for DLSS2, but I haven’t seen it tested with FSR2.
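To make that intuition concrete, here’s a rough back-of-the-envelope model (the frames-in-flight count and framerates are hypothetical illustrations, not measured numbers): if end-to-end latency scales with frame time, anything that raises framerate lowers latency, even after paying the upscaler’s fixed cost.

```python
# Rough model: end-to-end input latency scales with frame time.
# frames_in_flight is a hypothetical pipeline-depth assumption.

def latency_ms(frame_ms: float, frames_in_flight: int = 3) -> float:
    """Approximate end-to-end latency as pipeline depth x frame time."""
    return frame_ms * frames_in_flight

native = latency_ms(1000 / 60)    # ~50 ms at a native 60fps
upscaled = latency_ms(1000 / 90)  # ~33 ms at 90fps after upscaling
print(round(native, 1))    # 50.0
print(round(upscaled, 1))  # 33.3
```

Under this simple model, the latency win comes entirely from the higher framerate, which matches the DLSS2 measurements mentioned above.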

Ampere chips are indeed fabbed at Samsung.

That’s the trade-off. The entire idea of this tech is to get a more playable frame rate. Frame times are meaningless if you’re struggling to hit a playable 30fps in the first place.

Well, if you’re trying to hit 30fps then your frame-time “budget” is 33.33ms, making 4ms a much smaller proportion, but the budget framing still applies. And yes, of course you’re getting 25-45% better overall frametimes by upscaling a lower-resolution base image. That’s the point of the tech.
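The budget arithmetic in this thread is easy to sanity-check. A quick sketch (the native and 1440p render times below are hypothetical; only the ~4ms reconstruction cost comes from the discussion):

```python
# Frame-time budget arithmetic from the thread.
# Render times are hypothetical illustrations; the ~4ms
# upscaler cost is the figure quoted in the discussion.

def budget_ms(fps: float) -> float:
    """Per-frame time budget in milliseconds for a target framerate."""
    return 1000.0 / fps

def upscaled_frame_ms(base_render_ms: float, upscaler_cost_ms: float = 4.0) -> float:
    """Total frame time: lower-resolution render plus fixed reconstruction cost."""
    return base_render_ms + upscaler_cost_ms

print(round(budget_ms(60), 2))  # 16.67 ms per frame at 60fps
print(round(budget_ms(30), 2))  # 33.33 ms per frame at 30fps

# Hypothetical: native 4K takes 25 ms, the 1440p base render takes 12 ms.
native = 25.0
upscaled = upscaled_frame_ms(12.0)  # 12 + 4 = 16 ms
print(upscaled)                                    # 16.0
print(round((native - upscaled) / native * 100))   # 36 (% faster frametime)
```

So even with the 4ms reconstruction cost, the total frame time lands well inside the 60fps budget in this example, which is the whole point being argued above.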

The frame time increase measurements are useful to help understand how each approach scales on different GPUs from both AMD and Nvidia.