Technical graphics question

Ok, so I have a Radeon 9800 Pro 128, 1 gig of RAM, and a 1.7 GHz Celeron (which ciparis talked me into upgrading to a P4 2.4; the swap happens this coming Thursday). I've noticed that not only in Planetside, but also in Unreal Tournament 2003, I only get 30 FPS. As far as I can tell, this shouldn't be happening.

I'm hoping the upgrade from the Celeron to the P4 2.4 will improve this drastically, but is there anything else I can do, especially in the BIOS, that will help?

I believe I have my AGP aperture (or whatever it's called) set to 64 MB (in fact, I know I do), but is there anything else?

-Jason

It’s the Celeron.

With the Radeon 9800 Pro and your Celeron, you could turn up detail and resolution without affecting your framerate much. On the other hand, lowering detail won't improve it, because your Celery isn't pushing enough polygons in the first place.

Correct me if I'm wrong, but don't most DX8/9-era API calls (optionally) offload almost all of the geometry work (transform and lighting) to the GPU on the video card? I'm pretty sure most cards since the GF1/Radeon days do the actual geometry transformations, although I'm not sure to what extent developers use that functionality.

Feature promises from video card manufacturers are like political campaign promises.

Yes, hardware T&L does work to an extent. No, it’s not going to replace your CPU. The vast majority of work is still performed by your processor.

But if T&L is supported in DX8/9, shouldn't it just be a capability flag the video card drivers expose? I'm not sure why the burden would be on video card manufacturers if DX supports it directly; I'd say it's the game developer's decision if it's being underutilized.
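It more or less is a bit flag: in Direct3D 8 the driver reports a device-capability bitmask, and the game checks it before choosing hardware or software vertex processing. A minimal Python sketch of that check (the flag value is the real one from d3d8caps.h; the sample caps words are hypothetical, just for illustration):

```python
# Hardware T&L is advertised as one bit in the D3D device-caps bitmask.
# Flag value taken from d3d8caps.h; the example caps words below are made up.
D3DDEVCAPS_HWTRANSFORMANDLIGHT = 0x00010000

def supports_hw_tnl(dev_caps: int) -> bool:
    """Return True if the device caps advertise hardware T&L."""
    return bool(dev_caps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)

# Hypothetical caps word for a GF1/Radeon-class card vs. an older one.
print(supports_hw_tnl(0x00000012 | D3DDEVCAPS_HWTRANSFORMANDLIGHT))  # True
print(supports_hw_tnl(0x00000012))                                   # False
```

In real code the engine would then request hardware or software vertex processing at device creation, which is exactly the "developer decision" being described.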

It’s one of those weird paradoxes. The more the game developers rely on new graphics card features, the more you need a fast CPU.

For example, if you balloon the polygon count, the GPU can do a great job of handling the T&L calculations. But you've also substantially increased the collision-detection math, which is done by the CPU.
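The CPU-side scaling described above is easy to put numbers on: a naive collision pass tests every pair of objects, so the CPU work grows quadratically with object count. A back-of-the-envelope sketch (a hypothetical illustration, not how any particular engine does it):

```python
# Naive collision detection tests every pair of objects on the CPU,
# so doubling the object count roughly quadruples the work.
def pair_checks(n_objects: int) -> int:
    """Number of pairwise collision tests for n objects (n choose 2)."""
    return n_objects * (n_objects - 1) // 2

for n in (100, 200, 400):
    print(n, pair_checks(n))
# 100 -> 4950, 200 -> 19900, 400 -> 79800
```

Real engines cut this down with spatial partitioning, but the broad point stands: more on-screen objects means more CPU work regardless of how fast the GPU is.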

Also, these graphics-rich titles use more CPU for physics calculations as well as added AI; and if you have more objects, creatures, and AI people running around, that takes… more CPU.

Also, Unreal-engine games are heavily CPU-bound. Without a fast processor, you just aren't going to max out the video card. As for Planetside, don't expect a miracle: instead of uber-framerates, what you'll get is a consistent framerate without drops.

Make sure you're using the latest ATI drivers, and follow their rather exacting installation instructions.

You might actually want to make the aperture smaller: the GART table gets huge with a large aperture, which leads to occasional glitches with ATI cards. This tweak usually only matters when you're troubleshooting, though, and likely won't produce a significant performance boost.

I guess what I'm afraid of is that I'll buy a new processor (it's already on its way) and won't see a framerate boost. If that's the case, I'm going to shoot my computer with a 12 gauge.

Make sure you turn off vsync in both D3D and OpenGL; otherwise you'll never be able to exceed your refresh rate. E.g. 60 Hz = 60 fps, 70 Hz = 70 fps, et cetera.

That’s unlikely. Planetside may not boost much, but you should see a gigantic improvement in UT.

Also, the P4 Celeron is absolutely ass-tastic. You could probably upgrade to a K6-3 and see a performance improvement over that crappy POS.

The P4 didn't really get "good" until Intel added 512 KB of L2 cache (with the 2.0 GHz Northwood models). Remember, the original P4s had 256 KB of L2 and were generally outperformed across the board by Athlons. So you can imagine how hideously poorly the 128 KB L2 cache version of the chip performs.

In summary: P4 Celeron = ass.

Well, I mean, that’s what I’m doing. I’m leaving the Celeron and getting a straight up P4. So, uh, yeah?

For any processor, all you need to do to determine whether you're CPU-bound or GPU-bound (i.e. whether to replace the processor or the graphics card) is plot how your framerate depends on resolution. If your 640x480 framerate is the same as your 1600x1200 framerate, you are totally CPU-bound. If, on the other hand, your framerate scales with resolution, you need a better graphics card (though it's possible to be CPU-bound up to a point and GPU-bound from there onward). To wit: crank up the resolution, AA, and anisotropic filtering enough, and I'm sure you could get your system GPU-bound.
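The two-resolution test described above can be written down as a tiny classifier; the 10% tolerance here is my own assumption, not something from the post:

```python
def bottleneck(fps_low_res: float, fps_high_res: float,
               tolerance: float = 0.10) -> str:
    """Classify the bottleneck from framerates measured at a low and a
    high resolution (e.g. 640x480 vs 1600x1200). If the framerate barely
    moves, the GPU isn't the limit; if it falls with resolution, it is.
    The 10% tolerance is illustrative."""
    drop = (fps_low_res - fps_high_res) / fps_low_res
    return "CPU-bound" if drop <= tolerance else "GPU-bound"

print(bottleneck(31.0, 30.0))   # CPU-bound: same fps at both resolutions
print(bottleneck(120.0, 45.0))  # GPU-bound: fps falls with resolution
```

The first case matches the original poster's symptoms: a flat 30 fps regardless of settings points at the Celeron, not the 9800 Pro.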

With a 9800 Pro, I think we can safely rule out the video card being the issue.

Yes. My framerate stays the same in Planetside with every option turned up to the highest setting, at the highest resolution I can handle.

I am no tech expert, but I can guarantee that will not help your frame rate. And as a total stab in the dark, it may even go down.


What!

Don’t you know violence solves everything when it comes to computers?

Of course, even I must admit that shooting your computer with a 12 gauge might be going a bit too far. What you need to do is throw the case against the wall, and then threaten to shoot it. You'd be amazed what a little tough love can do for a computer's performance :D

I’m running a 2.4GHz P4, a Radeon 9800 256MB, and 1GB of RDRAM as my home machine, and I get consistently decent framerates in Planetside. I don’t jack the detail all the way up, but I do have 2x AA on at either 1024x768 or 1280x1024. I never get videolag in the game.

I don’t play UT, but on the same system I was able to play Battlefield 1942 at 1600x1200 with no AA and only occasional videolag.

Alright, just ordered an Athlon XP 2400+ (266 MHz FSB) to replace my Athlon XP 1600+. Yeah, the 2600+ is also a 266 MHz FSB part, but it's not available in large numbers, so it costs a hell of a lot more.

This is with my GeForce 4 Ti4200 and 1 gig of RAM.

Next week I should be able to see how much better Planetside runs.