
Voxel Cone Tracing


Josh


I've begun working on an implementation of voxel cone tracing for global illumination. This technique could potentially offer a way to perform real-time indirect lighting on the entire scene, as well as real-time reflections that don't depend on the reflected surface being onscreen, the way screen-space reflection does.

I plan to perform the GI calculations on a background CPU thread, compress the resulting textures using DXTC, and upload them to the GPU as they are completed. This means the cost of GI should be quite low, although there will be some latency before the indirect lighting catches up with changes to the scene. We might continue to use SSR for detailed reflections and only use GI for semi-static light bounces, or it might be fast enough for moving real-time reflections. The GPU-based implementations I have seen of this technique are technically impressive but suffer from terrible performance, and we want something fast enough to run in VR.
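The hand-off between the GI thread and the render thread can be sketched like this. This is just an illustration of the design, not engine code; the block type and queue names are hypothetical:

```cpp
#include <cstdint>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

// Hypothetical result of one GI update: a DXTC-compressed chunk of the
// 3D texture, ready to hand to the GPU.
struct CompressedGIBlock {
    int stage;                     // which GI stage/cascade this belongs to
    std::vector<uint8_t> dxtData;  // compressed texel data
};

// The GI thread pushes finished blocks; the render thread pops them and
// uploads to the GPU whenever they are available.
class GIQueue {
public:
    void Push(CompressedGIBlock block) {
        std::lock_guard<std::mutex> lock(mutex);
        blocks.push(std::move(block));
    }
    bool Pop(CompressedGIBlock& out) {
        std::lock_guard<std::mutex> lock(mutex);
        if (blocks.empty()) return false;
        out = std::move(blocks.front());
        blocks.pop();
        return true;
    }
private:
    std::mutex mutex;
    std::queue<CompressedGIBlock> blocks;
};
```

The render thread just polls the queue once per frame and uploads whatever has finished; the GI thread keeps working in the background regardless, which is exactly where the latency comes from.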

The first step is to be able to voxelize models. The result of the voxelization operation is a bunch of points. These can be fed into a geometry shader that generates a box around each one:

// Layout declarations assumed by this snippet: one point in, one cube
// (six 4-vertex triangle strips) out. The uniforms and varyings used below
// (projectioncameramatrix, voxelsize, geometry_position, geometry_normal)
// are declared elsewhere.
layout(points) in;
layout(triangle_strip, max_vertices = 24) out;

void main()
{	
	vec4 points[8];
	points[0] = projectioncameramatrix[0] * (geometry_position[0] + vec4(-0.5f * voxelsize.x, -0.5f * voxelsize.y, -0.5f * voxelsize.z, 0.0f));
	points[1] = projectioncameramatrix[0] * (geometry_position[0] + vec4(0.5f * voxelsize.x, -0.5f * voxelsize.y, -0.5f * voxelsize.z, 0.0f));
	points[2] = projectioncameramatrix[0] * (geometry_position[0] + vec4(0.5f * voxelsize.x, 0.5f * voxelsize.y, -0.5f * voxelsize.z, 0.0f));
	points[3] = projectioncameramatrix[0] * (geometry_position[0] + vec4(-0.5f * voxelsize.x, 0.5f * voxelsize.y, -0.5f * voxelsize.z, 0.0f));
	points[4] = projectioncameramatrix[0] * (geometry_position[0] + vec4(-0.5f * voxelsize.x, -0.5f * voxelsize.y, 0.5f * voxelsize.z, 0.0f));
	points[5] = projectioncameramatrix[0] * (geometry_position[0] + vec4(0.5f * voxelsize.x, -0.5f * voxelsize.y, 0.5f * voxelsize.z, 0.0f));
	points[6] = projectioncameramatrix[0] * (geometry_position[0] + vec4(0.5f * voxelsize.x, 0.5f * voxelsize.y, 0.5f * voxelsize.z, 0.0f));
	points[7] = projectioncameramatrix[0] * (geometry_position[0] + vec4(-0.5f * voxelsize.x, 0.5f * voxelsize.y, 0.5f * voxelsize.z, 0.0f));
	
	vec3 normals[6];
	normals[0] = (vec3(-1,0,0));
	normals[1] = (vec3(1,0,0));
	normals[2] = (vec3(0,-1,0));
	normals[3] = (vec3(0,1,0));
	normals[4] = (vec3(0,0,-1));
	normals[5] = (vec3(0,0,1));

	//Left
	geometry_normal = normals[0];
	gl_Position = points[0];
	EmitVertex();
	gl_Position = points[4];
	EmitVertex();
	gl_Position = points[3];
	EmitVertex();
	gl_Position = points[7];
	EmitVertex();
	EndPrimitive();

	//Right
	geometry_normal = normals[1];
	gl_Position = points[1];
	EmitVertex();
	gl_Position = points[2];
	EmitVertex();	
...
}
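The voxelization itself happens on the CPU. One simple way to do it, sketched below, is to sample each triangle's surface at sub-voxel spacing and record every cell a sample falls in; real implementations often use exact triangle-box overlap tests instead. This is an illustration of the idea, not the engine's actual voxelizer:

```cpp
#include <algorithm>
#include <cmath>
#include <set>
#include <tuple>

struct Vec3 { float x, y, z; };

// Voxelize one triangle by sampling barycentric points over its surface at
// half-voxel spacing and recording the cell each sample lands in.
std::set<std::tuple<int,int,int>> VoxelizeTriangle(const Vec3& a, const Vec3& b,
                                                   const Vec3& c, float voxelsize) {
    std::set<std::tuple<int,int,int>> cells;
    auto len = [](const Vec3& p, const Vec3& q) {
        float dx = q.x - p.x, dy = q.y - p.y, dz = q.z - p.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    };
    // Sample finely enough that no voxel the triangle touches is skipped.
    float longest = std::max(len(a, b), std::max(len(b, c), len(a, c)));
    int steps = std::max(1, (int)std::ceil(longest / (0.5f * voxelsize)));
    for (int i = 0; i <= steps; ++i) {
        for (int j = 0; j <= steps - i; ++j) {
            float u = (float)i / steps, v = (float)j / steps, w = 1.0f - u - v;
            Vec3 p{ a.x * u + b.x * v + c.x * w,
                    a.y * u + b.y * v + c.y * w,
                    a.z * u + b.z * v + c.z * w };
            cells.insert({ (int)std::floor(p.x / voxelsize),
                           (int)std::floor(p.y / voxelsize),
                           (int)std::floor(p.z / voxelsize) });
        }
    }
    return cells;
}
```

Run over every triangle in the model, the union of these cells is the "bunch of points" that gets fed to the shader above.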

Here's a goblin whose polygons have been turned into Lego blocks.

[Image: the voxelized goblin]

Now the thing most folks nowadays don't realize is that if you can voxelize a goblin, well then you can voxelize darn near anything.

[Image]

Global illumination will then be calculated on the voxels and fed to the GPU as a 3D texture. It's pretty complicated stuff but I am very excited to be working on this right now.
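The "cone" part of the lookup works by marching through the mipmapped volume with a sample footprint that widens with distance, picking the mip level that matches the footprint, and compositing front-to-back. Roughly, in CPU form (the volume sampler here is a stand-in for a trilinear fetch from the 3D texture's mip chain):

```cpp
#include <algorithm>
#include <cmath>
#include <functional>

struct Sample { float r, g, b, a; };  // premultiplied color + occlusion

// March one cone through a mipmapped voxel volume, accumulating radiance
// front-to-back until the cone is fully occluded or out of range.
Sample TraceCone(float ox, float oy, float oz,  // cone origin
                 float dx, float dy, float dz,  // cone direction (unit length)
                 float halfAngleTan, float voxelsize, float maxDistance,
                 const std::function<Sample(float, float, float, float)>& sampleVolume) {
    Sample result{0, 0, 0, 0};
    float dist = voxelsize;  // start one voxel out to avoid self-sampling
    while (dist < maxDistance && result.a < 1.0f) {
        // Footprint grows with distance; mip level matches the footprint.
        float diameter = std::max(voxelsize, 2.0f * halfAngleTan * dist);
        float mip = std::log2(diameter / voxelsize);
        Sample s = sampleVolume(ox + dx * dist, oy + dy * dist, oz + dz * dist, mip);
        // Front-to-back compositing.
        float weight = 1.0f - result.a;
        result.r += weight * s.r;
        result.g += weight * s.g;
        result.b += weight * s.b;
        result.a += weight * s.a;
        dist += diameter * 0.5f;  // step proportional to cone width
    }
    return result;
}
```

In the real renderer this runs in a shader against the 3D texture; a handful of cones per surface point approximates diffuse GI, and one tight cone along the reflection vector gives glossy reflections.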

[Image]

If this works, then I think environment probes are going to completely go away forever. SSR might continue to be used as a low-latency high-resolution first choice when those pixels are available onscreen. We will see.

It is also interesting that the whole second-pass reflective water technique will probably go away as well, since this technique should be able to handle water reflections just like any other material.


3 Comments


Recommended Comments

This is so cool looking. I remember a presentation by a game developer back in college where a similar technique was used for the game Delta Force. How bad is the performance when doing this on the goblin? Have you ever thought about using a technique like this for the terrain? I have no idea if it's a valid approach, but the term "voxels" often comes up when terrain is the topic.


The time it takes to voxelize the goblin is pretty long, but it doesn't matter because it is only done once. I just voxelize an object and then add it into an octree every time it moves. Animation will be a challenge, but I have an idea of how to do it.

The voxel shader looks cool, but it's really just to visually confirm that things are working! Since it generates a cube for each vertex, you could even run the goblin with animation, although the cubes would not be perfectly aligned to a grid.


Some quick calculations:

  • (256*6)x256x256 volume texture with mipmaps using DXT1 compression = 64 MB.
  • 0.125 voxel size = 32 meter range for the first stage.
  • Next stages are 64, 128, 256, 512, 1024...
  • 64 MB * 6 stages = 384 MB for GI textures.

So it looks like everything is reasonable. I don't think this will support low-latency high-resolution reflections unless you use a smaller voxel size and fewer stages. Maybe it would work with one small room.
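For anyone checking the math: DXT1 stores 8 bytes per 4x4 block, i.e. 0.5 bytes per texel, and budgeting the mip chain at the 2D worst case adds a third on top of the base level (a true 3D mip chain only adds about 1/7, so this is a conservative upper bound):

```cpp
// DXT1: 8 bytes per 4x4 texel block = 0.5 bytes per texel.
// Mip chain budgeted at the 2D worst case: +1/3 over the base level.
double GITextureMegabytes(int width, int height, int depth) {
    double baseBytes = (double)width * height * depth * 0.5;
    double withMips = baseBytes * 4.0 / 3.0;
    return withMips / (1024.0 * 1024.0);
}
```

Plugging in (256*6) x 256 x 256 gives 48 MB for the base level and 64 MB with mipmaps, times 6 stages = 384 MB, matching the figures above.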
