Please give us an overview of how the algorithm works from generating the octree, to cone tracing, to the gathering pass.
The technique is known as SVOGI (Sparse Voxel Octree Global Illumination), and was developed by Andrew Scheidecker at Epic. UE4 maintains a real-time octree data structure encoding a multi-resolution record of all of the visible direct light emitters in the scene, which are represented as directionally-colored voxels. That octree is maintained by voxelizing any parts of the scene that change, and by using traditional direct-lighting techniques, such as shadow buffers, to capture first-bounce lighting.
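[Editor’s Note: here is a minimal sketch, in C++, of what one node of such a sparse voxel octree might hold. The names and layout are illustrative assumptions, not UE4’s actual data structures.]

#include <array>
#include <memory>

struct DirectionalColor {
    // One RGB radiance value per axis direction (+X, -X, +Y, -Y, +Z, -Z),
    // so a voxel can return different light depending on viewing direction.
    std::array<float, 6 * 3> rgbPerFace{};
};

struct OctreeNode {
    DirectionalColor radiance;  // first-bounce light captured during voxelization
    float occupancy = 0.0f;     // how solid this cell is, used for occlusion
    // Sparse: children are allocated only where the scene has geometry.
    std::array<std::unique_ptr<OctreeNode>, 8> children;
};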
Performing a cone-trace through this octree data structure (given a starting point, direction, and angle) yields an approximation of the light incident along that path.
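[Editor’s Note: a minimal CPU-side sketch of such a cone trace, assuming the octree has been prefiltered so that one lookup can return averaged color and occlusion for any sampling footprint. sampleVoxels() is a hypothetical stand-in for that lookup, not an engine API.]

#include <algorithm>
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct VoxelSample { Vec3 color; float occlusion = 0; };

// Stand-in: a real version would fetch prefiltered voxel data at the octree
// level whose cell size matches the requested footprint.
VoxelSample sampleVoxels(const Vec3& /*pos*/, float /*footprint*/) { return {}; }

// March along the cone, sampling ever-coarser voxels as the cone widens, and
// composite front to back so that near geometry occludes far geometry.
Vec3 coneTrace(const Vec3& origin, const Vec3& dir, float coneAngle, float maxDist) {
    Vec3 accum;
    float transmittance = 1.0f;  // fraction of light not yet blocked
    float t = 0.1f;              // small offset to avoid self-intersection
    while (t < maxDist && transmittance > 0.01f) {
        float footprint = 2.0f * t * std::tan(coneAngle * 0.5f);  // cone width at t
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        VoxelSample s = sampleVoxels(p, footprint);
        accum.x += transmittance * s.occlusion * s.color.x;
        accum.y += transmittance * s.occlusion * s.color.y;
        accum.z += transmittance * s.occlusion * s.color.z;
        transmittance *= 1.0f - s.occlusion;
        t += std::max(footprint * 0.5f, 0.01f);  // step grows with cone width
    }
    return accum;  // approximate light incident along the cone
}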
The trick is to make cone-tracing fast enough, via GPU acceleration, that we can do it once or more per pixel in real time. Performing six wide cone-traces per pixel (one for each cardinal direction) yields an approximation of second-bounce indirect lighting. Performing a narrower cone-trace in the direction of specular reflection enables metallic reflections, in which the entire scene is reflected off each glossy surface.
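[Editor’s Note: continuing the sketch above, the per-pixel gather could then look like this. The six-cone diffuse pass and the single narrow specular cone follow the description; gatherIndirect(), reflect(), and the exact cone angles are illustrative choices.]

// Reflect the view direction about the surface normal (both assumed normalized).
Vec3 reflect(const Vec3& v, const Vec3& n) {
    float d = 2.0f * (v.x * n.x + v.y * n.y + v.z * n.z);
    return {v.x - d * n.x, v.y - d * n.y, v.z - d * n.z};
}

Vec3 gatherIndirect(const Vec3& p, const Vec3& normal, const Vec3& viewDir) {
    // Six wide cones, one per cardinal direction, approximate the diffuse
    // second bounce over the hemisphere.
    static const Vec3 dirs[6] = {
        {1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}};
    Vec3 diffuse;
    for (const Vec3& d : dirs) {
        // Weight each cone by how well it faces the normal; cones pointing
        // behind the surface contribute nothing.
        float w = std::max(0.0f, d.x * normal.x + d.y * normal.y + d.z * normal.z);
        Vec3 c = coneTrace(p, d, 1.57f /* ~90-degree cone */, 100.0f);
        diffuse.x += w * c.x;
        diffuse.y += w * c.y;
        diffuse.z += w * c.z;
    }
    // One narrow cone along the mirror direction gives the glossy reflection.
    Vec3 spec = coneTrace(p, reflect(viewDir, normal), 0.1f, 100.0f);
    return {diffuse.x + spec.x, diffuse.y + spec.y, diffuse.z + spec.z};
}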
[Editor’s Note: If the above sequence seems alien to you, it’s because it is. Global Illumination requires a totally new lighting pipeline. In a traditional game, all indirect lighting (light that has bounced off a surface) is calculated in advance and stored in textures called lightmaps. Lightmaps give game levels a GI look, but since they are pre-computed, they only work on static objects.
In Unreal Engine 4, there are no pre-computed lightmaps. Instead, all lighting, direct and indirect, is computed in real time for each frame. Rather than being stored in 2D textures, the lighting is stored in voxels. A voxel is a pixel in three dimensions. It has volume, hence the term “voxel.”
The voxels are organized in a tree structure to make them efficient to locate. When a pixel is rendered, it effectively asks the voxel tree “which voxels are visible to me?” Based on this information it determines the amount of indirect light (Global Illumination) it receives.
The simple takeaway is this: UE4 completely eliminates pre-computed lighting. In its place, it uses voxels stored in a tree structure. This tree is updated per frame and all pixels use it to gather lighting information.]
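[Editor’s Note: the “efficient to locate” part is the payoff of the tree: each step down halves the cell size, so finding the deepest voxel containing a point takes a handful of steps instead of a scan over every voxel. A sketch of that lookup, reusing the OctreeNode and Vec3 types from the sketches above; findLeaf() is illustrative.]

const OctreeNode* findLeaf(const OctreeNode* root, Vec3 p, Vec3 center, float halfSize) {
    const OctreeNode* node = root;
    while (true) {
        // Pick the child octant that contains p.
        int i = (p.x > center.x ? 1 : 0)
              | (p.y > center.y ? 2 : 0)
              | (p.z > center.z ? 4 : 0);
        const OctreeNode* child = node->children[i].get();
        if (!child)
            return node;  // sparse tree: empty regions simply stop here
        // Descend: the child cell is half the size, offset toward p.
        halfSize *= 0.5f;
        center.x += (i & 1) ? halfSize : -halfSize;
        center.y += (i & 2) ? halfSize : -halfSize;
        center.z += (i & 4) ? halfSize : -halfSize;
        node = child;
    }
}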
The demo reel wasn’t too impressive to me (a layman when it comes to computer graphics) until it showed the outdoor environment, but the real-time demo with the developer controlling things was kind of mind-blowing.
I’m sure others can answer this in more detail/accuracy, but I’ll take a stab at it.
The basic difference is that, although Doom 3 had real-time lighting, it was not global illumination. Doom 3 used “shadow volumes”, which you can conceptually picture as a “cone of darkness”. A light source was defined, and objects would cast shadows from it. The shadow volume is a three-dimensional region within which the light source doesn’t shine. The engine determined whether objects were inside or outside that volume: objects inside it were not lit up by the light source, and objects outside it were. The Wikipedia page has a screenshot with a good illustration of shadow volumes.
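[Editor’s Note: a minimal sketch of that inside/outside test, assuming a convex shadow volume stored as its boundary planes. Doom 3 actually evaluated this per pixel with the stencil buffer, but the geometric idea is the same.]

#include <vector>

struct Plane { float nx, ny, nz, d; };  // nx*x + ny*y + nz*z + d <= 0 means "inside"

bool insideShadowVolume(const std::vector<Plane>& volume, float x, float y, float z) {
    for (const Plane& pl : volume) {
        if (pl.nx * x + pl.ny * y + pl.nz * z + pl.d > 0.0f)
            return false;  // outside at least one plane: the light reaches this point
    }
    return true;  // inside every plane: the point is shadowed from this light
}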
With global illumination, light travels from a light source, hits an object, lights up that object, and then something happens that didn’t happen in Doom 3: the light bounces off that object and thereby lights up other objects, even if those other objects are not in the direct line of sight of the light source. This more closely matches reality and thus looks less fake.
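[Editor’s Note: a toy numeric contrast between the two models; the values here are invented purely to show the difference.]

#include <cstdio>

int main() {
    float direct = 0.0f;   // this surface has no line of sight to the light
    float bounce = 0.25f;  // light reflected onto it from a nearby lit wall
    std::printf("Doom 3-style lighting: %.2f (stays black)\n", direct);
    std::printf("Global illumination:   %.2f (softly lit)\n", direct + bounce);
    return 0;
}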
How close UE 4.0 comes to real global illumination and how much it “fakes it” are open to debate, but the fact that it gets the results we’re seeing in real time is a big step forward.
Ambient occlusion is my favorite new graphics technique in high-end games, and Wikipedia says it’s a crude approximation of global illumination. So I’m all for this.
Particles seem to be the big new thing too. There were a lot of them in the Final Fantasy demo.
Alan demos that at one point. He turns off the global illumination and you see the direct (aka Doom 3) lighting, where only things hit by the light are lit up. He turns it back on and you see the results of the bounce.
I got a demo on the Thursday morning of E3 and it was simply jaw-dropping when he showed what he can do in the editor. A lot of the technical art stuff was lost on me, to be honest, but I was dribbling at the possibilities for very rapid iteration and prototyping that the editor now offers.
To be honest, I liked the Square engine video more than the Epic engine video. But of course the reason is clear: apart from solid tech, the Square video had much better art direction and real artist work (it was storyboarded and directed), compared to the bland “evil guy with lava” of the Epic video.
Yeah, the cinematic video didn’t do much for me while the editor demo was incredibly awesome. But after getting a better grasp of the tech behind it the cinematic feels more impressive than it did at first viewing.
I don’t know too much about it, so this may be totally off-base, but don’t publishers complain that games cost too much to develop already? And doesn’t creating the art assets that take advantage of each new generation cost more than it did for the previous one? I guess I’m wondering if developers/publishers really want a new generation of visual fidelity that drives costs up again.
Publishers don’t want it, but consumers crave it.
In fact, judging from the E3 awards I’m seeing, the games that hold most people’s attention are the most graphically spectacular ones, whether exclusive (The Last of Us, Beyond, Halo 4) or multiplatform (Watch Dogs), with lots of comments about Star Wars 1313 and the Square Enix engine demo as well.
Publishers and MS/Sony will make the jump because the last year’s sales are in decline compared with the consoles’ lifetime sales; they need to improve and make a graphical jump again to reactivate the interest of gamers.
The idea is that the tools also get upgraded each go-around, so the difficulty of creating content doesn’t necessarily scale linearly. UE4’s editor is being built with lowering iteration times in mind: compiling source on the fly, new Kismet 2 stuff, a fully dockable and customizable interface, etc.
Most skills that artists have now will translate over, and it should honestly end up being LESS work (in some areas, at least), since the lighting is dynamic. No more lighting rebuilds or setting up lightmap UV channels. Just create the asset and import it into the editor.
Ah, that’s what you meant by Square. I didn’t make the connection until now. I kept thinking of Square-Enix, the publisher, not Square, the developer of Final Fantasy.