Yeah, that’s sort of similar to the upgrade I’ve slowly been putting together. I’ve had an Audigy 2 Platinum under my bed for the last three months. The CPU and motherboard should be here next week, and I’ll go memory shopping afterwards. Also, the X800 I want is out of stock everywhere.
Enough about my problems. Did everything work out of the box for you?
Pretty sweet, though I’d go for a GeForce 6800 series over the X800. Performance with current games is neck-and-neck (the X800’s benchmark wins are just a few FPS when it does beat the 6800), but when Pixel Shader 3.0 titles start shipping, the GeForce will have an edge.
The 6800GT looks like the sweet spot. Real-world performance very, very close to the 6800 Ultra, but $100 less, one slot, and one power tap. (And you should be able to overclock it to Ultra speeds.)
Since you’re not saving much here, I’ll just recommend upgrading to the Logitech Cordless MX for Bluetooth or the Logitech diNovo (also Bluetooth). I have the first at work and the second at home.
Best. Keyboard. Ever.
I wouldn’t recommend the diNovo for work, but at home I can leave the computer, take the MediaPad with me to the kitchen, and still control all my MP3-playing needs from there, as well as keep tabs on IMs, mail, and mobile text messages. And it looks great, too.
You could be right on the 3.0 comment, but there have been some recent tests showing the X800s going through shader code in fewer passes than R300 hardware (the 9500–9800s), and reducing the number of passes is going to have more impact on overall performance than branching or predication (for the shader lengths we’re talking about this year or next). It’s a hard call to make, but all else being roughly equal (stability, image quality, performance, etc.), it’s hard not to favor the more feature-rich part.
Time will tell. I’ve got a PCI-E 6800 Ultra I’m testing for Intel’s launch, and it’s definitely a screamer. I slapped an old favorite, Sacrifice, on for fun, and at 1280x960 with 8xS AA and 8x AF I was still pulling around 75–80 fps, which is just mind-boggling if you know how that game performed when it was first released.
My wife and I run a daycare in our home. Anything that isn’t attached, nailed down, or too heavy for a four-year-old will get moved around. It’s bad enough that I have to hunt for the TiVo remote every time I want to watch TV; I don’t want to hunt for my mouse every time I want to use my computer.
Where did you get the ATI X800 Pro? I’ve been trying to find one for weeks now, other than on eBay.
“You could be right on the 3.0 comment, but there have been some recent tests that show the X800s going through shader code in fewer passes than R300 hardware (9500-9800s), and reducing the number of passes is going to have more of an impact on overall performance than branching or predication (for the shader lengths we’re talking about this year or next).”
I’m pondering the X800/GF6800 choice myself right now, and it seems like the SM 3.0 issue is the usual point made in favor of the 6800. However, when are we actually going to see 3.0 in games? If it is Christmas 2006, then I don’t care because I’ll be buying a newer card by then.
Here’s HOCP’s article on SM 3.0. They’re pretty critical of the 3.0 support. At the end of the article, they have some interesting quotes about the actual use of 3.0 in the Far Cry SM 3.0 screenshots NVIDIA provided.
The other thing I haven’t been able to find out is how much the hardware is expected to change as 3.0 support matures. Is 3.0 support going to be like the first implementations of AA and AF? I still don’t turn those on with my GF4 because of the performance hit. If the GF6800 is to SM 3.0 what the GF3 was to AA/AF, then I don’t see any reason to buy it, because it won’t overcome the performance hit.
Overall, the present SM 3.0 thing strikes me as marketing hype. I’m sure that will change, but I guess not in this hardware cycle?
Supertanker summed up my thoughts on the issue pretty nicely. In the end, I just looked at the benchmarks (which were similar) and the graphics quality, and the X800 was noticeably better. So I went with the X800.
It’s a tough call to make, because it obviously requires a person to be psychic to predict how much support a piece of hardware’s features might or might not receive within its lifespan. That said, I do tend to believe the 2.0 model will be the inflection point for DX9, simply because it’ll be supported this year by all three major players (Intel, ATI, and nVidia), whereas 3.0 is supported only by nVidia (at least until the R500 hits the market).
But marketing deals do occur, and there’s little doubt nVidia will leverage their TWIMTBP relationships with developers to get some degree of 3.0 support this year. In fact, I know of one developer whose Xbox/PC title will support PS 1.1 and 3.0, but not 2.0. The 1.1 support is obviously for the Xbox and 3.0 for the GeForce 6 series.
It’s an interesting situation, far more so than last year, IMO. nVidia took the transistor hit to include these features (and FP blending), which drove clock speeds down and therefore performance (though not necessarily below the X800’s), whereas ATI stuck with their current architecture, made some minor tweaks, and cranked up the clock speeds.
One thing’s for sure: the texturing optimizations (view angles, MIP maps, LOD biases, etc.) and other shortcuts on these $400–500 boards need to stop. Individually, they perhaps don’t affect image quality to an appreciable degree, but combined they certainly do.