Dual core?

I’m looking at buying a new computer, and I thought I had this chip thing down: more GHz (well, MHz back in the day) means a better chip. More FSB means a better chip. Now, however, there’s a new player on the block: the dual core.

My questions about this fabulous new dual core are:

  1. What makes it better? I know the GHz is how fast it processes and the FSB is how fast it can swap things in and out of RAM. How does the dual core improve on that?

  2. Why does a dual core chip with lower GHz and the same FSB cost more (yes, I know what they stand for)?

  3. Does dual core run games better?

  4. Will my dual core be a good decision, or will it rank up there with my Rambus RAM?

It’s 2 CPUs in one. So that’s twice the processing goodness in theory, though in practice the gain never quite doubles what you get from one processor.
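
To put rough numbers on the “never quite double” part, Amdahl’s law is the usual back-of-the-envelope model: only the parallelizable fraction of the work speeds up. A minimal sketch, with made-up fractions just for illustration:

```cpp
#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n)
// where p is the fraction of work that can run in parallel
// and n is the number of cores.
double amdahl_speedup(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    // Hypothetical fractions: even if 80% of a program parallelizes
    // perfectly, two cores give about 1.67x, not 2x.
    std::printf("2 cores, 80%% parallel: %.2fx\n", amdahl_speedup(0.8, 2));
    std::printf("2 cores, 50%% parallel: %.2fx\n", amdahl_speedup(0.5, 2));
    return 0;
}
```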

It’s 2 CPUs that happen to be stuck together siamese-twin style. That’s gonna cost more to make than 1 CPU.

If the game is multithreaded and CPU bound, then yes, a dual core CPU would run it better. I’m not a game programmer, so I have no idea how common multithreading is in modern games.

It’s a little early to be adopting dual core right now. No real point to it except for folks who must have the latest greatest gadgets.

… or, who have SMP applications and don’t want to get a 2 or 4 processor board. Like myself.

For gaming, most code isn’t really multi-threaded, and for that matter, it’s rarely CPU bound, so dual core is not likely to help for current or near-future games.

It would be useful for various apps and/or if you like to multitask.

Near-future games, maybe, but multiple cores are what’s coming. Every console will have them, and in a year or two so will every new desktop. If I were developing a consumer app or game for a 2007 release, I’d build for them.

I think it’s pretty safe to bet that with the Xbox 360 and PS3 having multiple (more than 2) core/CPU systems, game developers will have to learn how to take advantage of them or be shown up by others who can. People like Epic, who are putting out titles on both, are likely to bring their engines’ multi-core improvements over to the PC. So, yeah: near-term not really helpful, long-term probably very helpful.

I’d say don’t get one now, get one later.

Yeah, once the price of the 4400 is $250 I’ll snag one. Should be around Xmas or so. I don’t think I have ever paid more than $250 for a chip, and that has done me good every time.

Sounds like a good plan. I’d agree with that strategy.

Actually, multicore will have a larger impact on overall game performance than people realize. In a normal Windows environment, you probably have applications and background tasks running when you launch a game. With multiple cores, your game can run on its own core and not be slowed down by everything else happening on your PC.
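
As a concrete (and strictly illustrative) example of that separation, here’s a minimal Win32 sketch that pins the current thread to core 0 with SetThreadAffinityMask, leaving the other core free for background tasks. Whether explicit pinning actually helps is workload- and scheduler-dependent; this is a sketch, not a recommendation:

```cpp
#include <windows.h>
#include <cstdio>

// Pin the calling thread (imagine it's the game's main loop) to core 0.
// The OS scheduler can then keep background tasks on the other core.
int main() {
    DWORD_PTR old_mask = SetThreadAffinityMask(GetCurrentThread(), 0x1);
    if (old_mask == 0) {
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("Thread pinned to core 0.\n");
    // ... game loop would run here ...
    return 0;
}
```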

Anyway, the whole industry is headed that way, regardless…

Also, just for the record, MHz/GHz haven’t mattered for quite a while now. AMD processors run at lower clock speeds than Intel Pentiums because of a shorter pipeline. A shorter pipeline means fewer MHz, but more instructions completed per clock cycle. Same story with something like the Pentium M. Real-world application benchmarks are about the only way to really know which processor is better, and even then it depends on which application you intend to use the most.
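
A quick way to see why: to a first approximation, throughput is clock rate times instructions per clock (IPC). The numbers below are hypothetical, just to show how a lower-clocked chip can come out ahead:

```cpp
#include <cstdio>

// First-order model: instructions/sec = clock (GHz) x IPC.
// Real performance also depends on caches, memory, branches, workload.
int main() {
    // Hypothetical figures for illustration only.
    double long_pipe_ghz = 3.8, long_pipe_ipc = 0.9;   // high clock, low IPC
    double short_pipe_ghz = 2.4, short_pipe_ipc = 1.5; // low clock, high IPC
    std::printf("long pipeline:  %.2f G instr/s\n", long_pipe_ghz * long_pipe_ipc);
    std::printf("short pipeline: %.2f G instr/s\n", short_pipe_ghz * short_pipe_ipc);
    return 0;
}
```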

K

Carmack’s not big on dual, triple or even multiple cores.

Read the transcript of his QuakeCon speech this year. He’s dealing with it, because he has no choice, but if John had his way, I think he’d much rather have a single processor that’s simply faster and faster.

http://forums.gaming-age.com/showthread.php?t=59336

–Dave

Wow, that’s like fucking revolutionary…

That’s lazy, is more like it. :roll:

Games over the next year, year and a half are going to start supporting multiprocessors in a big way because of the new consoles coming out. The nice thing is that buying a multi-core chip isn’t just some kind of game upgrade – as has been mentioned earlier, it will benefit a lot of different things. (Multi-threaded compiles in VS 2005? Sign me up! :D )

Yeah, yeah, Carmack the lazy bones.

He’s merely concerned that, at this time, parallelism in gaming applications won’t be as good as what can be done with a fast out-of-order core. He always wants the “best” solution; that’s what drives his comments on stuff like this, ragdoll physics being a gimmick, and whatnot.

If chip manufacturers had their way, they’d keep making single cores faster too. The problem is that the laws of physics dictate otherwise. There’s still probably plenty of room to eventually crank up clock speed, but processing costs have to come down. They may also have to move off silicon entirely to more esoteric (and expensive) semiconductors.

Until those processes come down in price, consumers aren’t going to pay for chips that offer 4.5 GHz clocks at 3+ times the price of current 3.8 GHz parts, so we get dual cores. (And parallelism is a pretty good bet for increasing performance in complex applications, so hopefully dual-core chips will push the computing space that way. We exceeded the need for processing power in non-complex applications a while back.)

Coming from Carmack, who spooges over stencil and volumetric shadows, bump mapping, and “megatexturing”, this is stupidly hypocritical. I couldn’t give two shits about some of the graphics direction in Doom 3, but I would take ragdoll physics over it. In fact, I’d take any kind of nice physical simulation over anything that makes my graphics look like they came out of a Play-Doh set.

Well, of course, because most games aren’t written as multithreaded apps right now. The point is that the days of “free” incremental speed upgrades are over, and have been for a little while now. Software itself has to change to get a performance gain, and there’s no reason that shouldn’t happen (or have started happening) already.
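
Just to make “software has to change” concrete, here’s a toy sketch of the kind of restructuring involved: splitting one big loop across two threads. (Modern C++ std::thread for brevity; the idea is the same with Win32 threads or pthreads.)

```cpp
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// Toy example: sum a big array on two threads instead of one.
// The restructuring (splitting work, joining results) is the part
// that doesn't come for free with a faster clock.
int main() {
    std::vector<int> data(1000000, 1);
    long long lo = 0, hi = 0;
    std::size_t mid = data.size() / 2;

    std::thread t1([&] { lo = std::accumulate(data.begin(), data.begin() + mid, 0LL); });
    std::thread t2([&] { hi = std::accumulate(data.begin() + mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    std::printf("sum = %lld\n", lo + hi);
    return 0;
}
```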

“Well, of course, because most games aren’t written as multithreaded apps right now.”
Because game applications are pretty damn unsuited to multithreading. You’ve got rendering, and that’s already taken care of by the parallel GPU. You’ve got sound mixing, which isn’t so CPU intensive that putting it on another core will gain you significant extra cycles. You’ve got standalone physical modeling of stuff that you’d have to isolate from everything else in the game world, or else the smoke-and-mirrors show falls apart. You’ve got… speech recognition. You can do frame-buffer post-processing.

What it comes down to is that there are so many things that depend on an accurate picture of the current game state that parallelizing game logic often creates more problems than the speed it buys.

With multiple AI threads, you’re going to have two or more entities that place themselves in the same location. Or my buddies running in place, stuck on each other. A whole new meaning to “race condition.” And you could use more advanced AI to solve the special cases, but if all that calculating pegs those cores, what have you really gained?
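
Here’s a hypothetical sketch of that failure mode and the usual fix: each AI thread has to claim its destination atomically, or two entities can both decide the same spot is free. (The names and the grid model are made up for illustration.)

```cpp
#include <cstdio>
#include <mutex>
#include <set>
#include <thread>

// Each AI thread claims a destination cell under a lock. Without the
// lock, both threads can see the cell as free and move there: the
// "two entities in the same location" race.
struct World {
    std::mutex m;
    std::set<int> occupied;  // cell ids already holding an entity

    bool try_claim(int cell) {
        std::lock_guard<std::mutex> lock(m);
        return occupied.insert(cell).second;  // false if already taken
    }
};

int main() {
    World world;
    auto ai = [&](int id) {
        const int target = 42;  // both AIs want the same cell
        if (world.try_claim(target))
            std::printf("entity %d moves to cell %d\n", id, target);
        else
            std::printf("entity %d picks somewhere else\n", id);
    };
    std::thread a(ai, 1), b(ai, 2);
    a.join();
    b.join();
    return 0;
}
```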

I’m sure there are solutions that can be custom-built, and they certainly will be (at great expense), but in the general case it’s not solved and may not be for a while. Carmack’s philosophy seems to be that he’d rather have the general case be powerful than lean on a lot of gimmicks and workarounds.

What I really don’t get is his dismissal of facial animation. It seems to me that faces are faces, thus they have a “solution”, and the facial animation in Half-Life 2 was pretty nifty and added a lot to the game. I think he’s wrong here and that they should be putting forth the effort to get that into the engine.

Carmack is complaining about the Xenon and Cell processors using in-order execution. This doesn’t apply to Intel- or AMD-class chips that are multithreaded or multicore (except for XScale).

The bottom line, though, is that while everyone can complain about parallelism, the entire industry is moving in that direction. There isn’t any other choice except to be stubborn and risk obsolescence. And while Carmack complains about it through his entire article, in the end he admits that his future development work for the next six months is on the 360.

K

You assume that everything needs to be correct. There’s nothing wrong with using data that’s one frame old for a lot of things.

Games can be multithreaded in a pretty significant way. It’s just that very few people have made the effort so far.
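
One common way to make “one frame old” safe is double-buffering the game state: workers read last frame’s snapshot while the update writes the next one. A minimal sketch (the state struct is a stand-in, obviously):

```cpp
#include <cstdio>

// Double-buffered game state: readers see last frame's stable snapshot
// while this frame's state is being written, so no locks on the read side.
struct GameState { float player_x; };

int main() {
    GameState buffers[2] = {{0.0f}, {0.0f}};
    int write_idx = 0;

    for (int frame = 0; frame < 3; ++frame) {
        const GameState& read = buffers[1 - write_idx];  // last frame, stable
        GameState& write = buffers[write_idx];           // this frame, in progress

        // e.g. AI or physics reads the old snapshot, writes the new state
        write.player_x = read.player_x + 1.0f;
        std::printf("frame %d: read x=%.0f, wrote x=%.0f\n",
                    frame, read.player_x, write.player_x);

        write_idx = 1 - write_idx;  // flip buffers at end of frame
    }
    return 0;
}
```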

How about data that’s twenty frames old? Because when the level designer puts in a scene where the poly budget is being taxed and three entities happen to throw fancysplode™ grenades at the same time, that’s what’s going to happen. And then when things get back to normal, if a couple of ragdolls on separate threads end up with their legs going through each other, they’re going to fly off in some weird direction with all their other limbs flopping like epileptic fish around the nexus of their fused legs.