Twilight of the GPU: an epic interview with Tim Sweeney

Thoughts?

Yeah, I should mention the IBM CELL, and Sony’s intention of putting two CELLs in the PS3…
Some people don’t want to touch the PS3 even with a stick; imagine that with two CELLs.

Sorry for my English.

The CELL is a bad example of what Sweeney is talking about, if that’s what you’re getting at. It’s not a homogeneous processor, with its separate SPU and PPU cores.

Pretty interesting article, but I think it’s going to take a lot longer than he seems to be forecasting. I also imagine some pretty big stumbles along the way. At least the glut of similar-looking games would be reduced, since things like voxel engines and other oddities become a possibility again.

He also seems awfully excited about the prospect of creating more work for programmers to do, but he’s probably also thinking about how this would help Epic as a middleware vendor. This kind of architecture would definitely appeal to Epic when other companies are going to be looking at either writing an entire rendering system from scratch, using a now-obsolete API like DirectX or OpenGL, or licensing the handy-dandy already-multicore-optimized-and-enhanced Unreal Engine 4…

I put CELL forward as an example of ‘change’. CELL is a little different from other processors, and no one likes it. No one likes change. :D

Sorry for my English.

I don’t know about Tim’s work, but I was pretty impressed with Intel’s Quake Wars raytracing demo.

TS: That’s my expectation. Graphics APIs only make sense in the case where you have some very limited, fixed-function hardware underneath the covers.

My initial reaction here is that he’s full of shit, and that the real point of graphics APIs is to provide hardware abstraction. But I’ve never done any 3D programming, so what do I know?

But graphics hardware has gone beyond those APIs, and is now programmable.

Tim’s been beating this drum for a few years now. In one sense, he’s right, but it will take a few more years. But what it really means is that the functionality built into today’s GPUs will become a part of the CPU instruction set.

Current poster child: Larrabee. No one knows if it will be successful, but if it is, then Intel could build the Larrabee vector units into multicore CPUs.

It’s not that CPUs will take over, but rather that CPUs and GPUs will merge.
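For a taste of what “GPU-style functionality folded into the CPU instruction set” already looks like today, here’s a purely illustrative SSE sketch in C++; Larrabee’s vector units are essentially a much wider, more general take on the same idea:

```cpp
// Illustrative sketch only: four floats processed per instruction via SSE,
// the existing "vector unit in the CPU ISA" that wider designs build on.
#include <xmmintrin.h>

// out[i] = a[i] * s + b[i]
void scaleAdd(float* out, const float* a, const float* b, float s, int n) {
    const __m128 vs = _mm_set1_ps(s);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vs), vb));
    }
    for (; i < n; ++i)               // scalar tail for leftover elements
        out[i] = a[i] * s + b[i];
}
```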

Calling Tim Sweeney full of shit is about the last thing you want to do. The only person in the 3D engine biz who MAYBE is less full of shit is John Carmack.

Sweeney’s just saying that 3D graphics APIs up until now have been fundamentally built on a single shaded-triangle-based rendering model. Which is absolutely true – look at OpenGL or D3D and you see lots of CreateVertexBuffer and SetShadingParameter calls.
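To make that concrete, here’s a minimal sketch of the triangle-centric vocabulary he means, written against OpenGL (the CreateVertexBuffer/SetShadingParameter names above are paraphrases, not literal entry points; this assumes a GL context, an extension loader, a bound VAO, and an already-compiled shader program):

```cpp
// Sketch only: the API talks in buffers, vertices and draw calls --
// everything ultimately flows through a stream of shaded triangles.
#include <GL/glew.h>

void drawOneTriangle(GLuint shaderProgram) {
    // Three 2D vertices of a single triangle.
    const float verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);                        // roughly "CreateVertexBuffer"
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    glUseProgram(shaderProgram);                  // roughly "SetShadingParameter"
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    glDrawArrays(GL_TRIANGLES, 0, 3);             // the primitive is the triangle
    glDeleteBuffers(1, &vbo);
}
```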

In the future, he’s saying – and I see no reason why he’d be wrong about this – that the hardware will all look like a bunch of shared-memory, vectorized processors, and the level of abstraction drops from abstracting over different triangle renderers to abstracting over different parallel execution environments for C++ code. Basically, the “GPU” as such goes away, and you wind up with graphics engines being written using the same kinds of vectorizing parallel libraries that scientific programming has been using for a while now.
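As a rough sketch of that model (assuming nothing about any particular future chip), here’s what per-pixel work looks like when expressed through the standard C++ parallel algorithms, i.e. the same generic machinery scientific code uses rather than a graphics API:

```cpp
// Hedged sketch: shading a software framebuffer with generic parallel
// primitives; the runtime maps rows across cores and vector lanes.
#include <algorithm>
#include <cstdint>
#include <execution>
#include <numeric>
#include <vector>

struct Pixel { std::uint8_t r, g, b, a; };

// 'frame' is assumed to hold width * height pixels.
void shadeFrame(std::vector<Pixel>& frame, int width, int height) {
    std::vector<int> rows(height);
    std::iota(rows.begin(), rows.end(), 0);

    std::for_each(std::execution::par_unseq, rows.begin(), rows.end(),
                  [&](int y) {
        for (int x = 0; x < width; ++x) {
            // Placeholder "shader": any per-pixel function could go here.
            frame[std::size_t(y) * width + x] = {
                std::uint8_t(x * 255 / width),
                std::uint8_t(y * 255 / height), 0, 255 };
        }
    });
}
```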

It’s going to be an unbelievably exciting time for gamers. You think GTA4 had a big open world? You think Oblivion had a long draw distance? Those are nothing compared to what’ll happen once the tyranny of the triangle gets overthrown. Voxels make for GREAT destructible environments, too – imagine Mercs 2 where you can blow colossal holes in the actual landscape…

If the graphics hardware is a general purpose processor then no, you don’t use a graphics API. You still need some sort of OS-level API to send and receive code and data to/from the device, control and track execution, and stuff like that. But that API is not talking in terms of graphics concepts (textures, vertices, triangles, transforms), so it’s not a graphics API. Of course, you can build graphics APIs, libraries, engines and middleware for it.
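For a feel of what such a non-graphics device API looks like, here’s a hedged sketch using the OpenCL host API: the vocabulary is buffers, kernels, and command queues, not textures or triangles (error handling and kernel compilation omitted; the context, queue, and kernel are assumed to be set up elsewhere):

```cpp
// Sketch only: move data to the device, launch code on it, read results back.
// Nothing here knows or cares whether the work being done is graphics.
#include <CL/cl.h>
#include <vector>

void runOnDevice(cl_context ctx, cl_command_queue queue, cl_kernel kernel,
                 std::vector<float>& data) {
    const size_t bytes = data.size() * sizeof(float);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                bytes, data.data(), nullptr);    // send data
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);

    size_t globalSize = data.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr,            // run code
                           &globalSize, nullptr, 0, nullptr, nullptr);

    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, bytes,           // get results
                        data.data(), 0, nullptr, nullptr);
    clReleaseMemObject(buf);
}
```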

I personally don’t see the death of the hardware rasterised triangle primitive coming because it is so convenient and fast… but I guess it’s possible.

Why would they, though? It’d be little different from the current situation with Intel GMA chips: extra silicon that costs more and is only useful to some subset of users. A far better idea would be a two-socket solution with a fast interconnect between them. One socket holds a multi-core CPU (where, realistically, the SIMD capabilities, viz. SSE, have actually been pared back somewhat to reduce die size and save cost) and the other a multi-core GPU (i.e. massive SIMD capabilities, less robust multi-stage pipelining, and high-clock-speed instruction cores). They’re both “CPUs” and all, but realistically they’re still different-function coprocessors. Has anyone suggested that replacing a Core 2 Quad with multiple (16+?) pure Larrabee cores would somehow be a good thing?

I think you mean “graphics whores”. I seriously doubt gameplay is going to improve much. It rarely does.

Besides which, unlimited freedom has a strong tendency to settle down into best practices… and best practices get instantiated into reusable toolsets… and we’re right back to programming to an API again.

I think you mean “graphics whores”. I seriously doubt gameplay is going to improve much. It rarely does.

I dunno, I might disagree with that. The less programmers have to fool around with rendering code the more time can be spent elsewhere. A unified abstraction layer/language would make that possible. Theoretically. Especially one that runs on PC, XBox, PS3, etc.

Enh, I side with Carmack: graphics and gameplay aren’t nearly as separate as grognards claim. I’d rather play Oblivion than Ultima 7, even if the basic RPG gameplay and world scale is similar. If that makes me a graphics whore, then I’m SPREADING MY LEGS FOR SWEENEY, BABY.

Eventually the best practices will settle down into a whole bunch of hopefully compatible APIs. I think something like SpeedTree is going to be the model here – there will be all kinds of rendering middleware modules that need to have some way to cooperate in creating the final scene. VoxelScape + SpeedTree + Havok + Euphoria + CharacterMaster = a mix-and-match bunch of middlewares that all have to coexist. Getting rid of the underlying graphics triangle constraints will explode the variety of these things.

You mean like the simpler graphics on the Wii? All that freed up time has really resulted in some gems.

You’re thinking like a graphics guy. Those vector units are also useful for GP-GPU stuff, which will become increasingly common. They’re just the next phase in SSE, really.

Then again, maybe I’m thinking like a CPU-centric guy ;-)

If you can put it all on a single die at 32nm or 22nm, why use multiple sockets? Another possibility is heterogeneous cores.

The CPU guys are looking for functionality to integrate into future cores.

You mean like the simpler graphics on the Wii? All that freed up time has really resulted in some gems.

I know you’re just snarking here but my understanding is that the Wii isn’t all that easy to develop for, plus you sort of have to work the Wiimote into your design or it doesn’t really count as a Wii game. Those obstacles would make it more difficult.

Hell, I don’t have any Wii development experience but the fact that the graphics look simpler might not mean anything. It might be a horror trying to get anything to appear on the screen, I dunno. That would totally defeat any perceived benefit.

No, graphics have ALWAYS been programmable; it’s just that the last 10 years have seen graphics done in a heavily constrained black box via APIs. We did have 3D graphics before 3DFX came along, and we had 3D games long before Doom, Quake, and Unreal. All of those flight and space sims that you played on your (or your older brother’s or your Dad’s) C64, Amiga, and Atari computers were in full 3D, and the graphics were programmed and rendered without the aid of a GPU and its API.

Heck, think about all of the award-winning games prior to the ’98 timeframe… all of the MicroProse flight sims, Falcon (1, 2, and 3), Wing Commander, Strike Commander (not really award-winning, but the first to do texture mapping), X-Wing/TIE Fighter, and others. You’ll notice something interesting, though: no first-person shooters in that list. That is what we gained from Doom/Unreal and the subsequent introduction of GPUs. We (well, you guys, not me… I couldn’t code 3D graphics if my life depended on it) got the ability to render the detailed “in your face” scenes necessary to do ground-level, first-person perspective.

… and for the record, just because at least one game dev will see this post, and because it has bugged me for about 15 years… Doom was NOT a 3D game!! Doom was at most 2.5D. Don’t believe me? Load up an old copy of Doom and shoot one of the monsters in the back.


I’ll do you one better. All games are 2D. Your monitor is a flat, 2D plane.

You might want to walk down the hall and talk with Tim. I didn’t get that from the interview. I got that he is proposing less abstraction, and a return of control to the programmer. Too much abstraction leads to a brain-dead framework that is an aggregate of the least common denominator across all platforms, and it constrains you to the vision and capabilities of whoever develops the abstraction.