DX10 on Company of Heroes - Ouch

Outside of the enhanced explosion from the bunker, I’m not noticing anything hugely different. For the heck of it I tried setting my graphics to the highest possible and didn’t see a difference; in fact, my performance test seemed to have gotten worse than before the patch. I’m running on DirectX 9.0.

This may sound like a stupid question, but are people expecting DX10 to make games prettier or faster? Because I kinda presume that, on the same hardware, it won’t do both.

Sorry, I missed it.

The problem is that a lot of applications set AA up wrong under DX10. You may ask for 2xAA, but the application might set it to 16xAA or, in some cases, double 16xAA. Unless AA is turned off entirely, any application can screw this up. Nvidia’s control panel sets it up correctly, and by having it override application settings you make sure an application can’t get it wrong.

I did this and got a huge increase in City of Heroes and WoW.

Under certain circumstances it could actually do both. Geometry shaders could be used to make multiple instances of the same geometry (e.g. generic soldiers) render much faster while simultaneously upping the texture sizes and rendering far better shadows basically for free. Of course, we won’t really start seeing that sort of thing until DX10 is a primary target for a game rather than an add-on renderer.
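For a rough idea of what that buys you, here’s a minimal C++ sketch of the related DX10-era technique, instancing via a second vertex stream (a geometry shader could amplify the mesh further on top of this). It assumes the device, buffers, shaders, and input layout were created elsewhere, and every name in it is a hypothetical placeholder, not anything from CoH:

```cpp
#include <d3d10.h>

// Hypothetical vertex layouts; a real engine would match these to the
// input layout created for its shaders.
struct Vertex       { float pos[3]; float normal[3]; float uv[2]; };
struct InstanceData { float world[16]; };  // per-soldier world matrix

// Draw many copies of one soldier mesh with a single call. Assumes the
// buffers, shaders, and input layout already exist.
void DrawSoldiers(ID3D10Device* device,
                  ID3D10Buffer* meshVB,      // shared soldier geometry
                  ID3D10Buffer* instanceVB,  // one InstanceData per soldier
                  ID3D10Buffer* meshIB,
                  UINT indexCount,
                  UINT soldierCount)
{
    ID3D10Buffer* buffers[2] = { meshVB, instanceVB };
    UINT strides[2] = { static_cast<UINT>(sizeof(Vertex)),
                        static_cast<UINT>(sizeof(InstanceData)) };
    UINT offsets[2] = { 0, 0 };

    device->IASetVertexBuffers(0, 2, buffers, strides, offsets);
    device->IASetIndexBuffer(meshIB, DXGI_FORMAT_R32_UINT, 0);
    device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // One draw call submits every soldier; the second vertex stream feeds
    // each instance its own transform, so CPU cost stays flat as the
    // army grows.
    device->DrawIndexedInstanced(indexCount, soldierCount, 0, 0, 0);
}
```

The point is that the hundredth soldier costs the CPU essentially nothing, which frees budget for bigger textures and better shadows on the GPU side.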

In any case, in the screenshots posted, the textures are noticeably higher quality in the DX10 version, though ultimately the difference probably isn’t big enough while actually playing the game to be worth the sort of framerate drop reported here.

If you follow the link to the Relic boards in my screenshot post, you’ll note that the dev discusses performance vs. effects. Relic opted to leverage DX10 for enhanced visuals rather than for a performance increase.

Also, yeah, I didn’t time the screenshot properly, but I’ve got a day job, y’know? I’m sure ExtremeTech et al. will have a number of thorough articles comparing DX10 visual quality & performance with DX9.

Also, the lighting seems softer in the DX10 version, with fewer hard edges to it.

When games that are actually designed for DX10 from the start begin coming out, let me know… these “DX10 patches” are BS.

Mono - judging from your high scores, it looks like you have vsync enabled. It’s enabled by default in the 1.7 patch.

You now have to add the command-line switch “-novsync” to disable it. You might want to do that and run your numbers again. What I’m seeing with the GeForce 8 cards is that the performance hit is even greater than what you listed.
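If it helps anyone, that switch goes on the end of the shortcut’s Target line. The install path below is just an example; adjust it to wherever CoH lives on your machine:

```
"C:\Program Files\THQ\Company of Heroes\RelicCOH.exe" -novsync
```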

You know how to hurt a guy. Hit him right in the vsync. You’re absolutely right, Jason. I’ve turned off vsync via that command-line switch, and yes, the DX9 vs. DX10 results now show a much greater disparity. I’ve edited the OP to reflect the new scores.

I realize this is a hypothetical discussion, but would you say that feature is going to get a lot of use in games that require DX10? Reusing the same model for objects that should be unique, like people, has always been sort of a cheat. I would’ve expected that part of the increased art budgets has been going toward making more objects unique.

Now that rendering high-poly meshes is a reality, reusing the same base mesh is a perfectly good idea as long as you combine it with something like morph targets and dynamic texturing. For an example of what I mean, take a look at the avatar editing systems in games like Saints Row. You can make a lot of unique-looking variations of humans off of one copy of actual model data. I suspect many games will take this route, keeping only a few base meshes for humans but using procedural algorithms to vary them for uniqueness.
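To make the morph-target idea concrete, here’s a small self-contained C++ sketch; it’s illustrative only, not from any actual engine, and all of the names are made up:

```cpp
#include <vector>

// Every character shares one base mesh; per-character blend weights push
// vertices toward artist-authored target shapes (e.g. "heavy jaw",
// "broad shoulders") to get a unique-looking result from shared data.
struct Vec3 { float x, y, z; };

std::vector<Vec3> BlendMesh(const std::vector<Vec3>& base,
                            const std::vector<std::vector<Vec3>>& targetDeltas,
                            const std::vector<float>& weights)
{
    std::vector<Vec3> out = base;
    for (size_t t = 0; t < targetDeltas.size(); ++t) {
        for (size_t v = 0; v < out.size(); ++v) {
            // Each target stores per-vertex offsets from the base mesh;
            // a weight in [0, 1] controls how strongly that feature shows.
            out[v].x += weights[t] * targetDeltas[t][v].x;
            out[v].y += weights[t] * targetDeltas[t][v].y;
            out[v].z += weights[t] * targetDeltas[t][v].z;
        }
    }
    return out;
}
```

Dynamic texturing works the same way conceptually: layer per-character decals or tinted detail maps over one shared base texture.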

Doesn’t disabling vsync produce tearing? Also, wouldn’t turning it off hugely mess up your game if run on SLI/CrossFire?

In my lifetime of video cards (Voodoo 1, TNT2 Ultra, GeForce 1, GeForce 3, 4, 5700, 6800, and finally 7900) I have never seen visual tearing in any game I have ever played on any of those cards with vsync disabled.

You must have slow eyes, because tearing is pretty much unavoidable when vsync is disabled. Having said that, I’ve never really been bothered by no-vsync tearing, but it is very easily noticeable.

Microsoft’s marketing machine basically told us that would happen… in not so many words. So yes, I can see why people are expecting better graphics and improved performance (or at least no performance loss).

This has happened with every generation of DirectX that also paired up with a new generation of GPUs:

DX2 with 3D acceleration (especially bilinear filtering)
DX7 with hardware T&L
DX8 with programmable shaders
DX9 with shader model 3.0 (especially HDR)

… and now DX10.

Given that this is an RTS, actual gameplay (as opposed to the benchmark, which is very GPU-intensive) will likely not be as slow, since it’s more CPU-bound.

Having had a chance to revisit a few campaign missions, I can attest that the performance decrease is practically unnoticeable. CoH plays & looks great, same with the couple of 1v1 AI scrims I played.

If this was an FPS, the performance hit would be grossly apparent, but outside of the benchmark dropoff, I couldn’t be happier with the DX10 patch.

For those that have the hardware, play one or two of the night missions to appreciate the improved lighting & shadows.

Pretty sad framerates from the latest hardware.