But you’ve added context there. You’ve placed the measurement in a certain budget, which does not exist in the original image. If it takes 4ms but saves 8ms on AMD, and takes 2ms but only saves 6ms on Nvidia, then it doesn’t matter. The net result is 4ms faster for both.
The image was from the video, which had the context…
Aceris (#10421):
Wrong. The amount saved depends almost entirely on original raster time (assuming the same upscale ratio). So the one that saves less for the same total speed-up gives more benefit in both absolute and percentage terms.
18ms - 6ms + 2ms goes from ~56 fps to ~71 fps.
24ms - 8ms + 4ms goes from ~42 fps to 50 fps.
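The arithmetic above can be sketched in a few lines. Frame times are in milliseconds: start from the native render time, subtract the raster time saved by rendering at a lower internal resolution, then add the upscaler’s own cost. The function names here are just for illustration.

```python
def fps(frametime_ms: float) -> float:
    """Convert a frame time in milliseconds to frames per second."""
    return 1000.0 / frametime_ms

def upscaled_frametime(native_ms: float, raster_saved_ms: float,
                       upscaler_cost_ms: float) -> float:
    """Native frame time, minus raster time saved, plus upscaler cost."""
    return native_ms - raster_saved_ms + upscaler_cost_ms

# "Nvidia-like" case: 18ms - 6ms + 2ms = 14ms
print(round(fps(18)), "->", round(fps(upscaled_frametime(18, 6, 2))))  # 56 -> 71
# "AMD-like" case: 24ms - 8ms + 4ms = 20ms
print(round(fps(24)), "->", round(fps(upscaled_frametime(24, 8, 4))))  # 42 -> 50
```

Note that both cards gain the same 4ms, but because fps is the reciprocal of frame time, the card starting from the shorter frame time gains more fps from that same 4ms.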
Right, Nvidia GPUs run the reconstruction faster and that could potentially matter in edge cases. Say you run FSR2 in balanced mode on both GPUs and the AMD one can’t quiiiiiite lock at 60fps, so you’re forced to either drop settings inside the game or drop to FSR performance mode.
That could certainly happen, but the overall benefit is so great that you’ll always want to use it if your GPU can’t run the game at your monitor’s native resolution with the image quality settings you desire.
It didn’t, really.
Obviously. My example assumes equivalent raster time. But raster time wasn’t factored into the video. They just isolated the FSR2 computation time which, as I’ve said, means little to nothing on its own.
Aceris (#10424):
There’s literally a scene where he explains the methodology. For some reason he uses FSR 1.0 upscaling as the baseline, which seems a bit suss and is going to exaggerate any perf differences, but not by that much.
His measurement is comparing the frametime of both, on the same card, against a baseline. I’m really not sure what your point is.
The video is really interesting - he really digs into the details that FSR2 handles less well and mentions the “crunchiness” from sharpening. But at the end of the day they are almost all minor details (unlike FSR1) and you get a huge performance boost.
Methodology ≠ context. You can tell me how you measured the rain, but that doesn’t tell me if 2 inches of rain in May is low or high, or bad or good, or anything useful at all.
And then he compares that figure against other cards in the image posted in this thread, as if that told us anything useful. Knowing that a 3080 runs the algorithm faster than a 6800XT is not important. It’s certainly not enough to claim AMD screwed up or are incompetent with a statement like:
It won’t necessarily influence your buying decision by itself, but it’s interesting information that may have implications for lower-end AMD GPUs. We don’t know yet. Either way, I agree AMD created something pretty neat for all gamers and should be lauded for it.
I think it removes a qualitative advantage from Nvidia, or the perception of one. FSR may or may not match DLSS but it’ll be seen as the same kind of thing and competition can now be on price, RT, temps, drivers or whatever else :)
It’s the same deal as with FreeSync/G-Sync.
AMD is offering a free alternative that may not be quite as good as the proprietary and exclusive Nvidia solution, but hey, it’s free, and it works on a much broader range of hardware, including older hardware. It just has to be good enough to spur widespread adoption. And it levels the playing field a bit more.
Are people really out here still only looking for 60fps? Yesterday I caved and bumped up DLSS from quality to balanced (while dropping a few other settings) to get Assetto Corsa Competizione from ~60fps up to ~90fps, which is much more fluid. Triple monitors really want at least a 3070, but these algorithms got my 3060 out here like Lance Armstrong cheating to go fast.
Minimum 60fps, sure. There’s two ways of looking at this reconstruction upscaling stuff.
a) I’ve got a really fast PC but I can’t hit 60fps at ultra quality with RT on!
b) I’ve got an old slow PC and I can’t lock at 60fps at 1080p low!
I guess my minimum is just higher now. I can’t go back. These algorithms still have the same impact for me: They’re performance enhancers. I even learned to live with slight bits of car ghosting in DLSS balanced, although I’ll try FSR 2.0 for comparison when that becomes an option. I expect something similar since it’s also temporal now.
I view using anything other than quality mode as an act of desperation. Luckily with a 3080 playing at 1440p that has never been even close to necessary.
Yeah, it was that or bump other settings way down to hit 90fps minimum. (I’m also putting all view distances at max, cause high speed driving.) My poor 3060 doesn’t understand why it’s being asked to render three screens instead of one. A 4000 series card will make it a moot point.
Basically, yeah. In almost all games I’ll raise the quality until it caps out at 60fps, sometimes settling for less if maxing out quality isn’t otherwise possible (I have G-Sync, so 50 is usually fine). And on console I’ll take the 30fps quality/resolution mode over performance mode in most games (to be fair, driving games are the exception, but there 60fps is fine). So if it’s a question of native at 60 or DLSS at 90, I’ll take native. I basically never use DLSS performance mode.
Yeah, this is me.
Today Microcenter has a couple of RX 6600 cards for $329, which is MSRP, right? Is that a reasonable $330 purchase, or should I keep waiting for an Nvidia GPU to come down into that price range? (I’ve resigned myself to the sub-$300 price range being useless these days. I do miss the good old days of waiting for a sale on a previous-gen mid-range card for $130…)
It’s pretty comparable to a 3060 and thus really the lowest-end GPU I would recommend a gamer actually buy. Nothing wrong with it, if you need one.
The counterargument is to wait until the next generation releases, which should be pretty soon. But what’s your confidence you’ll actually be able to buy one?
6600 is fine if you game at 1080p.
If you game at 1440p, look for a 6700.
I’m not much of a “gamer” anyway. I do miss playing TW since I gave my previous GPU to my kid to replace his that died, and something has likely come out recently that would interest me, but I’ve ignored it because I have no GPU (well, I put in an old GTX 560 that was in a drawer). Mostly I run a GPU-accelerated astrophotography image-processing program.