I had a look at Crysis on a high spec machine this weekend and I was wondering just how it would be possible to improve on the engine both graphically and in terms of physics and make it more hardware intensive?
Are we reaching a point of diminishing returns? Will the cost of a high end next gen GPU justify the improvements we see?
Sure we can start running at higher and higher resolutions but what would the point be?
If you want to see where graphics “could” go, watch a Hollywood blockbuster. Almost all the special effects are computer generated. So if hardware keeps improving, it should eventually be possible to get movie-level graphics running playably in games.
I dunno if it’s the fault of the engine, Vista, or space aliens, but after about an hour Crysis seems to bog down and I have to restart it to get the framerates I get when it first starts up. Other than that it’s pretty awesome.
There are two areas where most engines still do minimal work (primarily because of rendering power) that would yield huge increases in visual quality if implemented to a larger degree: volumetrics and tessellation. True volumetrics would give you something closer to movie-quality smoke, light, fire, water, etc. Spline-based tessellation would effectively remove the need for discrete levels of detail, and a continuous tessellation algorithm would conceivably yield the “real” situation of progressive refinement of objects as distance decreases, rather than the wholesale popping into existence and vertex popping we see currently.
Both of these things are tremendously computationally intensive (volumetrics generally include an entire physics subcomputation as part of the render process, for example; tessellation is still essentially an unsolved problem that, last time I looked, required progressive iterative solutions for optimal quality), so I wouldn’t expect to see them any time soon. It’s a lot easier to fake things reasonably convincingly with the hardware we have today, and will continue to have for the near future, than it is to solve the problems honestly. But an honest solution would tremendously improve visual fidelity.
(Special mention also goes out to procedural texture/geometry, of which we’re beginning to see a bit in Crysis but it’s still pretty rudimentary.)
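The continuous-refinement idea above can be sketched very simply: instead of swapping between a handful of discrete meshes, derive a tessellation level directly from distance so detail ramps smoothly. A toy sketch in Python — the function name, falloff scheme, and clamping range are my own illustrative assumptions, not from any particular engine:

```python
def tessellation_level(distance, base_level=64.0, min_level=1.0, max_level=64.0):
    """Continuously refine detail as an object approaches the camera.

    Hypothetical scheme: tessellation level falls off inversely with
    distance, clamped to a sane range, so detail changes smoothly
    instead of popping between discrete LOD meshes.
    """
    if distance <= 0.0:
        return max_level
    level = base_level / distance
    return max(min_level, min(max_level, level))

# Closer objects get smoothly increasing detail; no discrete LOD pop.
for d in (1.0, 4.0, 16.0, 64.0, 256.0):
    print(d, tessellation_level(d))
```

A real implementation would use a screen-space error metric rather than raw distance, but the shape of the idea is the same.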
3 big areas off the top of my head that still need work and are actively being worked on by various groups:
Lighting is still generally a cheap hack: loads better than it was in the days of simple vertex lighting or lightmaps, but nowhere near the real-time, ray-bounced, full-radiosity, shadow-correct stuff we will start seeing in the relatively near future.
Animation still sucks across the board, and the better everything else gets, the more obvious this is thanks to the uncanny valley effect. AI- and physics-based animation systems like endorphin are beginning to fix this, but they still have a ways to go.
(The option for) fully unique texturing on every surface will help a lot with some environments.
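To make the “cheap hack” lighting point concrete: most engines of this era compute direct light per surface and then stand in for all the bounced, indirect light with a flat ambient constant. A minimal sketch of that classic hack — the vectors and constants here are illustrative, not taken from any real engine:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade(normal, light_dir, albedo, light_color, ambient=0.1):
    """Classic direct-lighting hack: Lambert diffuse plus a flat
    ambient term standing in for all indirect (bounced) light."""
    n = normalize(normal)
    l = normalize(light_dir)
    diffuse = max(0.0, dot(n, l))
    return tuple(a * (ambient + diffuse * lc) for a, lc in zip(albedo, light_color))

# Surface facing the light gets full diffuse plus ambient; a surface
# facing away gets only the constant ambient fudge factor.
print(shade((0, 1, 0), (0, 1, 0), (1.0, 0.5, 0.2), (1.0, 1.0, 1.0)))
print(shade((0, 1, 0), (0, -1, 0), (1.0, 0.5, 0.2), (1.0, 1.0, 1.0)))
```

Everything interesting about real lighting — colour bleeding, soft occlusion, bounced light — is hidden inside that single `ambient` constant, which is exactly why it reads as a hack.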
Forget that. There is something still more basic that games need: lighting.
Games use premade shadow maps plus some basic real-time lighting. They need to reach a more realistic lighting system, like Global Illumination (GI). And just for that they need much more powerful computers, maybe 10x more.
Global Illumination is effectively a scene-wide volumetric texture. The radiance equation includes all the crap volumetric textures need. Conversely, no volumetric texture exists without some way of modelling the light that passes through it.
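The “radiance equation” being referred to here is the rendering equation; GI means solving (or approximating) something like:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

That is: outgoing radiance at a point equals emitted radiance plus the integral, over the hemisphere above the surface, of the BRDF times incoming radiance times the cosine term. The integral is recursive — incoming light is itself the outgoing light of other surfaces — which is why honest GI is so much more expensive than shadow maps plus an ambient constant.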
My favorite quote on the subject (wish I could remember who said it) is: “I can’t wait until games are completely photorealistic so that developers can start spending more time making them fun instead of on the graphics engine.”
Oh yeah, because nothing will advance non-linear gameplay like sticking with hand-written IF-THEN blocks. Yeah, that’s the future of gameplay. Yeah, those twenty million stars in Elite II, those were exciting.
Advances in technology are what we need, and will keep needing, to keep gameplay rolling forward. There’s a reason we’re not all sitting around playing 80s “classics”.
While I don’t necessarily disagree, I’m not sure that ever more vigorous physics is likely to give you what you want. Unless by physics you mean things like “numerical models of population and social dynamics”. (Those would be interesting and could lead to a lot more depth in games, but they’re still pretty hard problems at an applied academic research level, since coming up with models that quantify everything important in the universe as a few key variables is a non-trivial matter.)
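For a flavour of what a “numerical model of population dynamics” even looks like, the classic toy example is the Lotka-Volterra predator-prey system stepped forward in time. A minimal Euler-integration sketch — the coefficients and starting populations are arbitrary illustrative values, not calibrated to anything:

```python
def step(prey, predators, dt=0.01, a=1.1, b=0.4, c=0.4, d=0.1):
    """One Euler step of the Lotka-Volterra predator-prey equations.

    a: prey growth rate, b: predation rate,
    c: predator death rate, d: predator growth per prey eaten.
    All coefficients are arbitrary illustrative values.
    """
    dprey = a * prey - b * prey * predators
    dpred = d * prey * predators - c * predators
    return prey + dt * dprey, predators + dt * dpred

# Populations oscillate: prey booms feed predator booms, which crash prey.
prey, pred = 10.0, 2.0
for _ in range(1000):
    prey, pred = step(prey, pred)
print(prey, pred)
```

The point of the parenthetical above stands, though: two coupled variables are easy; deciding which few variables capture a whole society is the hard part.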
None of this, of course, has anything to do with the Crysis engine.
More processing power will allow you to model more complex scenarios, be they AI, worlds, physics, whatever.
If Crysis were as good as it was going to get I’d pack up my mouse right now. Sure, it looks nice, but it’s not that much more complex than Doom. It looks a lot better than Doom, but it’s nowhere near the real world. There’s aliasing, you have a narrow field of vision but no edge blur, the textures are low-res compared to what we can see, there are visible bands of anisotropic filtering, etc. etc. Hell, the AI doesn’t even learn. If I don’t see learning AIs which report back to a central database and update client copies in the next ten years… I’ll be very disappointed. Valve are collecting all those stats on player deaths and the like; I want to see game AI doing the same!
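The stats-collection idea is cheap to prototype: bucket player death positions into a grid on the client, ship the counts to a central service, and let updated AI bias its behaviour by the merged heatmap. A minimal client-side sketch — the grid size and the reporting scheme are my own invention, not how Valve actually does it:

```python
from collections import Counter

def death_heatmap(deaths, cell_size=8.0):
    """Aggregate death positions (x, y) into grid-cell counts.

    This is the kind of payload a client could report to a central
    database; AI tuning would then read back the merged heatmap.
    The cell size is an arbitrary illustrative choice.
    """
    counts = Counter()
    for x, y in deaths:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] += 1
    return counts

deaths = [(3.0, 4.0), (5.0, 7.0), (20.0, 4.0)]
print(death_heatmap(deaths))  # two deaths in cell (0, 0), one in (2, 0)
```

Counters from many clients merge with simple addition, which is what makes this a plausible thing to centralise.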
I didn’t read this whole thread, but the point of diminishing returns is near.
If you want to go towards Hollywood-type special effects, your budget would have to be insane. Think about the amount of modeling and texture work that already has to go into games. I think detail and budget will be the limiting factors fairly soon, not hardware.